• jet@hackertalks.com
    4 months ago

    This has been around for a while in research papers. Getting people’s pulse rate, and even blood pressure from videos.

    Other things you can get from videos: electrical interference in the audio that reveals which power grid somebody is using, and background noises that can be mapped as well. So uploading a video deanonymizes you quite well, for a properly motivated investigator.

    In the escalating war against deepfakes, however, this will just be part of the arms race, and new deepfakes will simply include those fluctuations.
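The power-grid trick mentioned above is often called electrical network frequency (ENF) analysis: mains hum leaks into a recording at 50 Hz or 60 Hz depending on the region's grid. A minimal sketch of the idea, where the sample rate, the synthetic hum signal, and the `tone_power` helper are all hypothetical stand-ins:

```python
import math

def tone_power(samples, sr, freq):
    """Naive single-frequency DFT magnitude (a Goertzel-style probe)."""
    re = sum(s * math.cos(2 * math.pi * freq * n / sr) for n, s in enumerate(samples))
    im = sum(-s * math.sin(2 * math.pi * freq * n / sr) for n, s in enumerate(samples))
    return math.hypot(re, im)

# Synthetic stand-in for a clip's audio track: a faint 50 Hz mains hum,
# as a camera microphone might pick up in a 50 Hz grid region.
sr = 1000  # Hz, hypothetical sample rate
hum = [0.01 * math.sin(2 * math.pi * 50 * n / sr) for n in range(sr)]

# Probe both candidate mains frequencies; the stronger one hints at the grid.
p50 = tone_power(hum, sr, 50)
p60 = tone_power(hum, sr, 60)
print("likely grid:", "50 Hz (Europe/Asia)" if p50 > p60 else "60 Hz (Americas)")
```

Real ENF forensics goes further, tracking the hum's frequency drift over time and matching it against logged grid data, but the probe above shows why the signal is recoverable at all.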

      • Etterra@lemmy.world
        4 months ago

        Or, alternatively, just showing up to do stuff in person. Of course that’s not always feasible but still.

        • GamingChairModel@lemmy.world
          4 months ago

          We’re starting to see it in some cameras, mostly for still photography, but I don’t see why the basic concept wouldn’t extend to video files, too. Leica released a camera last year that signs the photo, including the timestamp and location data, and Canon, Nikon, Sony, Adobe, and Getty have various implementations of the technique.

          Once the major photo-editing software workflows support it, we’ll probably see some kind of chain-of-custody authentication support from camera to publication.

          Of course, that doesn’t prevent fakes in the sense of staged productions, but the timestamp and location data would go a long way.
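The in-camera signing idea boils down to: hash the image, bundle the hash with the timestamp and location into a manifest, and sign the manifest with a key held by the device. A toy sketch of that flow, noting that real schemes (e.g. the C2PA standard behind Leica's Content Credentials) use asymmetric signatures and certificate chains, while an HMAC with a placeholder device key stands in here for brevity:

```python
import hashlib
import hmac
import json

# Placeholder for a key provisioned into the camera; real systems use a
# private key whose public half is certified by the manufacturer.
DEVICE_KEY = b"secret-key-burned-into-camera"

def sign_capture(image_bytes, timestamp, gps):
    """In-camera step: build and sign a manifest over the capture."""
    manifest = json.dumps({
        "sha256": hashlib.sha256(image_bytes).hexdigest(),
        "timestamp": timestamp,
        "gps": gps,
    }, sort_keys=True).encode()
    sig = hmac.new(DEVICE_KEY, manifest, hashlib.sha256).hexdigest()
    return manifest, sig

def verify_capture(image_bytes, manifest, sig):
    """Verifier step: the signature and the embedded hash must both match."""
    ok_sig = hmac.compare_digest(
        hmac.new(DEVICE_KEY, manifest, hashlib.sha256).hexdigest(), sig)
    ok_hash = json.loads(manifest)["sha256"] == hashlib.sha256(image_bytes).hexdigest()
    return ok_sig and ok_hash

photo = b"\xff\xd8...raw jpeg bytes..."
manifest, sig = sign_capture(photo, "2024-06-01T12:00:00Z", (52.52, 13.405))
print(verify_capture(photo, manifest, sig))              # original verifies
print(verify_capture(photo + b"edit", manifest, sig))    # any change breaks it
```

The last line is the chain-of-custody property: flipping even one byte of the file invalidates the signature, which is also why edited derivatives need their own provenance records.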

          • Laser@feddit.org
            4 months ago

            But then what? So you have a camera signing its files, and we pretend that extraction of the secret key is impossible (which it probably isn’t). You load the file into your editing program, because the source files are usually processed further. You create a derivative of the signed file, and there’s no connection to the old signature anymore. So this would only make sense if you provide the original file for verification purposes, which most people won’t do.

            I guess it’s better than nothing, but it will require more infrastructure to turn it into something usable, or if this was only used in important situations where manual checking isn’t an issue, like a newspaper posting a picture but keeping the original to verify its authenticity.

            • GamingChairModel@lemmy.world
              4 months ago

              so this would only make sense if you provide the original file for verification purposes

              Yes, that’s exactly what I’m imagining. You’re keeping receipts for after-the-fact proof, in case it needs to be audited. If you have a newsworthy photograph, or evidence that needs to be presented to the court system, this could provide an important method of proving an untampered original.

              Maybe a central trusted authority can verify the signatures and generate a thumbnail for verification (take the signed photo and put it through an established, open-source, destructive algorithm to punch out a 200x300 lossy compressed JPEG that at least confirms the approximate photo was taken at that time and place, but without sufficient resolution/bit depth to compete with the original author on postprocessing).
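The destructive step could be as simple as block-averaging pixels down to a tiny preview, so that approximate content survives but fine detail (and editing headroom) does not. A pure-Python sketch with a hypothetical `shrink` helper operating on a fake grayscale frame:

```python
def shrink(pixels, factor):
    """Block-average a grayscale image (list of rows) down by `factor`.
    Each output pixel is the mean of a factor x factor block."""
    h, w = len(pixels), len(pixels[0])
    out = []
    for y in range(0, h - h % factor, factor):
        row = []
        for x in range(0, w - w % factor, factor):
            block = [pixels[y + dy][x + dx]
                     for dy in range(factor) for dx in range(factor)]
            row.append(sum(block) // len(block))
        out.append(row)
    return out

# Fake 8x8 grayscale frame standing in for the signed original.
image = [[(x * y) % 256 for x in range(8)] for y in range(8)]

# 2x2 preview: enough to recognize roughly what was photographed,
# nowhere near enough data to re-edit and pass it off as an original.
preview = shrink(image, 4)
print(preview)
```

The authority would publish this preview alongside the signature verdict, while the full-resolution original stays with the author for audits.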

            • EngineerGaming@feddit.nl
              4 months ago

              Also, at least the last time I heard about these cameras, only specific proprietary editors (like Adobe’s) were compatible, which introduces all sorts of other problems.

  • AnAmericanPotato@programming.dev
    4 months ago

    Honestly, I don’t find this very creepy. This is information you are already putting out there for everyone to see. If I post a video of myself speaking, I am not concerned about people seeing how my skin vibrates in that video.

    As video generation tools become more advanced, we will need better algorithms to validate videos. The bar for “fooling the vast majority of humans” is much, much lower than the bar for “being literally indistinguishable from a real video”. The main problem I see is that it’s going to be a cat-and-mouse game, and I don’t think any method you publish will remain valid for very long in practice. The same method will be used to improve the next version of video generators.

    Also, lots of real videos use post-processing that might wash out some of the details they are looking for. Video producers might re-record lines so they don’t perfectly match the video to begin with. It’s been a long time since I used a Samsung phone, but on my old S6, I remember that it always had a beauty filter applied to the selfie camera that made me look like a creepy porcelain doll. I could probably make a deepfake of myself that looks more “real” than those real videos and photos.