
      The 2023 Wrap-up

      pubsub.slavino.sk / hackerfactor · Saturday, 30 December - 22:31 · 5 minutes

    Every year I try to focus on a theme. For example:
    • In 2017, it was Cyber Goat. I had this crazy idea that a small company might be able to make large organizations move. (It was an incredible success.)
    • In 2019, I focused on a few themes: fraud and detection, practical ways to deter bots, and how to set up a basic honeypot.
    • In 2020, I heavily focused on Tor. Sadly, there was nothing I could do to make them address some of their huge security holes. It's been over four years and they have not addressed any of the issues. (Then again, some of these have been known problems for decades and the Tor Project just ignores them, even when there are viable solutions. Seriously: don't use Tor if you need real online anonymity.)
    This year, I've been heavily focused on automation. I don't have unlimited resources or personnel, so I want to make the most of what I've got.

    Automation for security and administration tasks isn't a new concept. Everyone does it to some degree. Firewalls are configured and deployed, cron jobs automate backups, etc.

    My crazy twist applies this concept to tasks that are usually not automated. I've been developing and using automated security solutions for over 30 years. This year, I publicly disclosed many of the solutions that I use. For example:
    • In No Apps, I mentioned how I use test images to identify the underlying graphics libraries that applications use. Some libraries change images during their rendering process, and others have known vulnerabilities. This quick semi-automated test allows me to learn more about the software that I use.
    • I covered how I use fail2ban and strict mail server rules to mitigate junk mail and spam. On a typical day, my inbox receives only 2-4 spam messages; the remaining hundreds of emails are caught by the mail server's filter rules. I configured it once and occasionally (maybe 30 minutes every few months) make small adjustments. For the most part, it's fully automated and I don't have to think about it. (A rough sketch of this kind of setup appears after this list.)
    • I mentioned the various honeypots I use and some of the findings. I am automatically alerted when attack volume changes, and I can tell whether it's specific to a single server or just a change in the background noise level.

      Knowing the background level is really important. I'm less concerned if everyone sees the same attacks; directed attacks get my attention. I also use it for business intelligence: I often know what exploits the attackers are looking for before any official announcement.
    • Based on my work with honeypots, I released a four-part series on No-NOC Networking. This series describes some simple solutions that dramatically reduce the number of scans and attacks that a typical online service receives. Best of all, they can be implemented without any full-time system administrators or a network operations center (NOC). These changes deliver better security at no extra cost.
    • Along with descriptions of my honeypots, I also released Nuzzle as an open source packet sniffer that can be used as an IDS/IPS system. Many of my own servers use Nuzzle. Between Nuzzle and some simple kernel configurations (see the sketch after this list), you can easily see a 99% drop in the number of attacks. (That's not "stopping 99% of attacks", that's "99% fewer attacks that need to be stopped.")
    • Besides software automation, I did a lot of hardware hacking by making some simple internet-of-things (IoT) devices. These ranged from a basic temperature sensor (for monitoring my server cabinet) to different approaches for checking on the health of elderly friends. I even automated the tracking of my AirTag.
    • Through the use of automated detectors, I was able to rapidly detect and block uploads when a new kind of prohibited content went viral.
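    For readers who wonder what the mail filtering setup mentioned above looks like, here is a minimal sketch. It assumes a Linux host running Postfix with fail2ban's stock "postfix" filter installed; the thresholds are illustrative placeholders, not my actual rules.
      # Append a minimal jail for the stock Postfix filter (example values).
      printf '%s\n' '[postfix]' 'enabled = true' 'maxretry = 3' 'findtime = 600' 'bantime = 86400' >> /etc/fail2ban/jail.local
      # Reload fail2ban and confirm the jail is active.
      systemctl reload fail2ban
      fail2ban-client status postfix
    The idea is that repeated failures result in a temporary firewall ban, so the filtering runs without daily attention.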
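    Similarly, here is a minimal sketch of the kind of kernel settings mentioned in the Nuzzle item above. These are common, well-documented Linux sysctl options; the specific settings I use are covered in the No-NOC series and on the Nuzzle web site.
      # Ignore ping requests so casual scanners get no ICMP response.
      sysctl -w net.ipv4.icmp_echo_ignore_all=1
      # Ignore broadcast pings (smurf-style probes).
      sysctl -w net.ipv4.icmp_echo_ignore_broadcasts=1
      # Use SYN cookies so SYN floods don't exhaust the connection table.
      sysctl -w net.ipv4.tcp_syncookies=1
      # Don't accept source-routed packets or ICMP redirects.
      sysctl -w net.ipv4.conf.all.accept_source_route=0
      sysctl -w net.ipv4.conf.all.accept_redirects=0
      # Persist the settings under /etc/sysctl.d/ so they survive a reboot.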
    All of this simplifies my system management needs. Rather than spending a lot of time and money on yet another partial solution, I use my limited resources more effectively. Fewer network attacks, less spam, and better ways to monitor the things that I think are important.

    Bad Automation

    I typically view automation as a good thing. I often tell my peers: if it can be automated, then it should be automated. However, sometimes automation isn't beneficial. A good automated system should be reliable and accurate. Unfortunately, some recently deployed automated solutions are neither.

    The growing and unchecked use of AI is a really good example here. Whether it's AI-generated art or AI-written text, I'm seeing more misuse and problems than benefits. Personally, I think the punishments for using unchecked AI text, like ChatGPT, for official reports should be more severe than public criticism and the occasional sanction.

    There are also problems with how these AI systems are being trained. Since deep networks can often memorize their training data, the AI-generated results can amount to copyright infringement. Personally, I'm eagerly waiting to hear the outcome of Getty's lawsuit against Stability AI for stealing pictures for training and generating minor variants as uncited "AI generated imagery". And the New York Times recently sued OpenAI and Microsoft for the unauthorized use of copyrighted news articles.

    While I use many different types of automated systems (including lots of different kinds of AI), it takes effort to create good automated rules. Bad automation leads to bad user experiences. (Just ask the Etsy forums about bad automated blocking rules. And the US Post Office really needs to consult a human-computer interaction (HCI) expert about the usability of its automated kiosks.)

    And then there's C2PA. I don't think I'm done writing about them. I rarely use the term "snake oil" to describe a fake security solution or a bad automated evaluation system, but this time I think it is well deserved.

    Next year?

    Long-time readers of my blog will know that I am often rough, brutally honest, and hyper-focused on the negative aspects. In 2023, I tried to write more uplifting and less cranky blog entries. (Other than C2PA and my criticisms of AI, I think I did pretty well compared to previous years. And with AI and C2PA, I do like the concepts, but I take issue with the execution and how they generate more problems than they attempt to solve.)

    In 2024, I'm going to try to have a little more fun with this blog. I'm hoping to write more about things that make me happy (which usually include projects that have worked out better than expected). For example, rather than focusing on how deep network AI causes problems, I might try to experiment with some good uses for it. When it comes to authenticity and provenance, the C2PA management challenged me to come up with something better. I think I've got a partial solution that works; when I get time to write it up and create a working example, I'm certain to blog about it. (A working partial solution is definitely an improvement over the defective solution from C2PA.) And who knows what else I'll do? Some of the best projects have been those surprises along the way.

    Tags: #Privacy, #Network, #Security, #Forensics, #[Other]


      C2PA's Worst Case Scenario

      pubsub.slavino.sk / hackerfactor · Monday, 18 December - 21:06 · 20 minutes

    A month ago, I wrote a very detailed and critical review of the Coalition for Content Provenance and Authenticity (C2PA) specification for validating media metadata. My basic conclusion was that C2PA introduces more problems than it tries to solve; in fact, it doesn't solve any of the problems it set out to address.

    Although a few people have left comments on the blog, I have had a near daily stream of new contacts writing in. Some are curious developers, while others are people tasked with evaluating or implementing C2PA. The comments have come from small organizations, Fortune-500 companies, and even other CAI and C2PA members. (Since some of them mentioned not having permission to publicly comment, I'm treating them all as anonymous sources.)

    Despite having different areas of interest, they all seem to ask variations of the same basic questions. (It happens so frequently that I've often been cutting and pasting text into replies.) In this blog entry, I'm going to cover the most common questions:
    1. Are there any updates?
    2. How bad is C2PA's trust model?
    3. Isn't C2PA's tamper detection good enough?
    4. Is C2PA really worse than other types of metadata?
    5. What's the worst-case scenario?
    For people interested in problem solving with forensic tools, I have a challenge in the 5th item.

    Question 1: Are there any updates?

    There have been a couple of updates to C2PA since my blog came out. Some appear to be unrelated to me, and some are directly related.
    • C2PA released version 1.4 of their specifications. Unfortunately, it doesn't address any of the problems with the trust model. (Since this happened a few days after my blog, it's an unrelated update.)
    • Some of the bugs submitted to various C2PA GitHub projects are suddenly being actively addressed by the C2PA/CAI developers (including some submitted by me). While this is progress, none have been fixed yet. In most cases, the bugs seem to have been copied from the public GitHub locations to Adobe's private bug management system. (This means that C2PA development is really only happening inside Adobe.) Unfortunately, none of these bugs address C2PA's trust model.
    • Last week I had a productive discussion with people representing C2PA and CAI. I'm not going to disclose the discussion topics, except to say that there were some things we agreed on, and many things we did not agree on.

      Excluding myself and my colleague, everyone on the call was an Adobe employee. In my previous blog, I mentioned that CAI acts like a façade to support C2PA. Between all of the key people being at Adobe and all development going through Adobe, I'm now getting the distinct impression that CAI and C2PA are both really just "all Adobe all the time." It does not appear to be the wider community that C2PA and CAI describe.
    • A few days after the conference call, Adobe made a change to CAI's Content Credentials web site. Now self-signed certificates (and some of the older non-test certificates) are flagged as untrustworthy: "This Content Credential was issued by an unknown source."
      [Image: analysis.php?id=e2d944193b24b267e20bf9c3afe994c9b9ce89ec.6558&fmt=orig]
      The only way around this requires paying money to a certificate authority for a "signing certificate". (At DigiCert, this costs $289 for one year.) Switching to a fee-based requirement dramatically reduces the appeal of C2PA for smaller companies and individual developers. And yet, adding a fee requirement does nothing to deter fraud. (Fraud is a multi-billion dollar per year industry. The criminals can afford to pay a little for "authentication".)

      As an aside, "HTTPS" adoption had a similar pay-to-play limitation until Let's Encrypt began offering free certificates. Today, HTTPS is the norm because it's free. Unfortunately, those free certificates are for web domains only and not for cryptographic signatures.

    Question 2: How bad is C2PA's trust model?

    My previous evaluation identified different ways to create forgeries, but it only superficially covered the trust model.

    Back in the old days of network security, we relied on "trust", but "trust" was rapidly shown to be vulnerable and insecure. "Trust" doesn't work very well as a security mechanism; it's not even "better than nothing" security. "Trust" is used for legal justification after the fact. We trust that someone won't ignore the "No Soliciting" sign by the door. The sign doesn't stop dishonest solicitors. Rather, it establishes intent for any legal consequences that happen later.

    In networking, "trust" was replaced with "trust but verify", which is better than nothing. Today, network security is moving from "trust but verify" to "zero trust".

    C2PA is based heavily on "trust", not "trust but verify". (And definitely not "zero trust".) With C2PA:
    • We trust that the metadata accurately reflects the content. (Provenance, origin, handling, etc.) This explicitly means trusting in the honesty of the person inserting the metadata.
    • We trust that each new signer verified the previous claims.
    • We trust that a signer didn't alter the previous claims.
    • We trust that the cryptographic certificate (cert) was issued by an authoritative source.
    • We trust that the metadata and cert represent an authoritative source. While the cert allows us to validate how it was issued ("trust but verify"), it doesn't validate who it was issued to or how it is used.
    • We trust the validation tools to perform a proper validation.
    • We trust that any bad actors who violate any of these trusts will be noticed before causing any significant damage. (Who will notice it? We trust that there is someone who notices, somewhere to report it, and someone who can do something about it. And we trust that this will happen quickly, even though it always takes years to revoke a pernicious certificate authority.)
    All of this trust is great for keeping honest people honest. However, it does nothing to deter malicious actors.

    C2PA is literally based on the honor system. However, the honor system is the definition of having no security infrastructure. (We don't need security because we trust everyone to act honorably.)

    Question 3: Isn't C2PA's tamper detection good enough?

    Putting strong cryptography around unverified data does not make the data suddenly trustworthy. (It's lipstick on a pig!)

    Other than the cryptographic signature, all of C2PA's metadata can be altered without detection. (Yes, C2PA also uses a hash, like sha256, as a secondary check, but it's trivial to recompute the hashes after making any alterations.) The cryptographic signature ensures that the data hasn't changed after signing. C2PA calls this tamper evident, while other C2PA members describe it as being tamper resistant or tamper proof. Regardless of the terminology, there are four possible scenarios for their tamper detection: unsigned, invalid, valid, and missing.
    • Case 1: Unsigned data
      This case occurs when the C2PA metadata exists but there is no cryptographic signature. As I've mentioned, everything can be easily forged. Even though the C2PA metadata exists, you cannot explicitly trust it since you don't know who might have changed it.

      The C2PA specification permits including files as a component of another file. This unsigned case can easily happen when incorporating media that lacks C2PA metadata. In my previous blog entry, I pointed out that CAI's example with the flowery hair contains a component without a signature.

      [Image: analysis.php?id=3f509b153c8ba2f26732bc8c521a1463085b74eb.690710&fmt=orig&size=256]

      What this means: If there is no cryptographic signature, then we cannot trust the data. We must use other, non-C2PA methods to perform any validation.
    • Case 2: Invalid signature or checksum
      When the C2PA metadata has an invalid checksum or signature, it explicitly means something was changed. However, you don't know what was changed, when it happened, or who did it.

      Keep in mind, this does not mean that there is intentional fraud or malicious activity. Alterations could be innocent or unintentional. For example, importing a JPEG into the Windows Photo Gallery (discontinued but still widely used) automatically updates the metadata. This update causes a change, making the C2PA signature invalid. The same kind of unintentional alteration can also happen when attaching a picture to an outgoing email. (As opposed to importing a picture into the iPhone Photo Library, which simply strips out C2PA metadata.)

      If the signature is invalid, then C2PA says we cannot trust the data. However, the important aspects of the data may not have changed and may still be trustworthy. We must use other, non-C2PA methods to determine the cause and perform any validation.
    • Case 3: Valid signature and checksums
      There are two different ways that a file can contain valid signatures and checksums:

      (A) While we don't know if we can trust the data, we know we can trust that it wasn't changed after being signed.
      or
      (B) The data was changed after being signed and any invalid signatures were replaced, making it valid again. In this case, the data and signatures are untrusted.

      In both cases (A and B), the signatures and checksums are valid and we cannot trust the data. Moreover, we can't distinguish unaltered (A) from intentionally altered (B). Since we cannot use C2PA for trust or tamper detection, we must use other, non-C2PA methods to perform any validation.
    • Case 4: No C2PA metadata
      This is currently the most common case. Does it mean that the picture never included C2PA metadata, or that the metadata was removed (stripped)?

      CAI's Content Credentials service addresses this problem by performing a similar picture search (perceptual search). Most of the time, they find nothing. However, even if they do find a visually similar file that contains C2PA metadata, it doesn't mean the data in the search result is trustworthy or authentic. (See Cases 1-3.)
    In every case:
    • The data is untrusted.
    • Intentional tampering cannot be detected.
    • The data must be validated through other, non-C2PA means.
    What does C2PA provide? Nothing except computational busywork. C2PA's complexity and use of buzzwords like "cryptography" and "tamper evident" make it appear impressive, but it currently provides nothing of value.

    Question 4: Is C2PA really worse than other types of metadata?

    Given that C2PA does not validate, why do we need yet another standard for storing the exact same information?

    The metadata information provided by C2PA is typically present in other metadata fields: EXIF, IPTC, XMP, etc. However, C2PA provides the same information in an overly complicated manner, requiring 5 megs of libraries and four different complex formats to process. In contrast, EXIF and IPTC are simple formats that are easy to implement, require few resources, and come in very small libraries. Even XMP (Adobe's XML-based format that sucks due to a lack of consistency) is a better choice than C2PA's JUMBF/CBOR/JSON/XML.

    A few people have written to me with comments like, "C2PA has the same long-term integrity issue as other metadata" or "Isn't C2PA as trustworthy as anything else?"

    My reply is a simple "No." C2PA is much worse:
    1. Regular metadata doesn't claim to be tamper-evident.
    2. Regular metadata doesn't use cryptographic signatures.
    3. Regular metadata doesn't have the backing of tech companies like Adobe, Microsoft, and Intel that give the false impression of legitimacy.
    Other types of metadata (EXIF, IPTC, etc.) can be altered and should be validated through other means. C2PA gives the false impression that you don't need to validate the information because the cryptographic signatures appear valid.

    Even the converse is problematic. Because the metadata can be altered without malicious intent (see the previous Case 2), an invalid C2PA signature does not mean the visual content, GPS, timestamps, or other metadata is invalid or altered. Everything else could be legitimate even with an invalid C2PA signature.

    Unlike other types of metadata, C2PA's cryptographic signatures act as a red herring. Regardless of whether the signature is valid, invalid, or missing, forensic investigators should ignore C2PA's signatures, since they say nothing about the data's validity, authenticity, or provenance.

    Question 5: What's the worst-case scenario?

    This is the most common question I've received. I usually just explain the problem. But for a few people, I've manufactured examples of the worst-case scenario for the questioner to evaluate. In each of these instances, I used the questioner's own personal information to really drive home the point.

    Since lots of people are reading this blog entry, I'm going to use Shantanu Narayen as the fictional example of the worst-case scenario. Narayen is the current chair and CEO of Adobe; his personal information in this example comes from his Wikipedia page. (I'm not doxing him.)

    In my opinion, the worst-case scenario is when the data is used to frame someone for a serious crime, like child pornography. Here's the completely fictitious scenario:

    Last month, Shantanu Narayen got into an online argument with a very vindictive person. The vindictive person acquired a bunch of child pornography and used C2PA to attribute the pictures to Narayen. The pictures were then distributed around the internet.

    It doesn't take long for the pictures to be reported to the National Center for Missing and Exploited Children (NCMEC). NCMEC sees the attribution to Narayen and immediately passes the information to the FBI and California's Internet Crimes Against Children (ICAC) task force. A short while later, the police knock on Narayen's door.

    I'm not going to include a real picture of child porn for this demonstration of a fictional situation. (That would be gross, illegal, and irrelevant to this example.) Instead, I used a teddy bear picture (because it's "bearly legal").

    For this evaluation, ignore the visual portion and assume the picture shows illegal activity. Here's one of those forged pictures!

    [Image: analysis.php?id=04857686607b05b9fe3efef11fa6a11cd68e51df.306924&fmt=orig&size=256]

    Now, you get to play the role of the investigator: Prove Narayen didn't do it.

    You'll probably want to:
    • Download the image. (Save it as 'bearlylegal.jpeg'.) To make sure it wasn't modified between my server and your evaluation, the file is 306,924 bytes and the SHA1 checksum is 04857686607b05b9fe3efef11fa6a11cd68e51df. (A quick command-line sketch for checking this appears after this list.)
    • Evaluate it using whatever metadata viewers you have available.

      • If you don't have a starting place, then you can use my own online forensic services, like FotoForensics and Hintfo. But don't feel like you need to use my tools.
      • From the command line, try exiftool or exiv2.
      • Most graphical applications have built-in metadata viewers, including Mac's Preview, Windows file properties ("Details" tab), Gimp, and Photoshop. (Just be careful to not hit 'save' or make any changes to the file's metadata.)

    • Use Adobe's command-line c2patool to evaluate the C2PA metadata. With c2patool, you can view which sections are valid and the X.509 certificate contents:
      c2patool -d bearlylegal.jpeg
      c2patool --certs bearlylegal.jpeg | openssl x509 -text | less

    • View it at Adobe/CAI's Content Credentials online service. While this web service currently isn't as informative as the command-line c2patool, it does provide information about the file's C2PA metadata.
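    If you want a quick command-line starting point, here is a minimal sketch (assuming a Linux shell with exiftool and exiv2 installed); the expected values come from the file size and SHA1 checksum listed above.
      # Confirm the download matches the published size and SHA1 checksum.
      wc -c < bearlylegal.jpeg     # expect 306924
      sha1sum bearlylegal.jpeg     # expect 04857686607b05b9fe3efef11fa6a11cd68e51df
      # Dump the regular (non-C2PA) metadata: EXIF, IPTC, and XMP.
      exiftool bearlylegal.jpeg
      exiv2 -pa bearlylegal.jpeg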
    If you try solving this problem, either with software or as a conceptual exercise, I'd love to hear your results! I hope people include information like:
    • What's your background? Inquisitive amateur, student, software engineer, law enforcement, legal, professional investigator, or something else?
    • What tools did you use?
    • What were some of your observations?
    • What other issues should we consider?
    • Do you think you could prove innocence or reasonable doubt?
    • Let's turn the problem around: If you were working for the prosecution, do you think you could get a conviction?
    I look forward to hearing how people did!

    [Spoiler Alert]

    If you want to evaluate this problem on your own, come back to this section later.

    If you go through these various tools, you'll see:
    • The metadata says it's from a Leica camera and is explicitly associated with Narayen. (For this forgery, the metadata looks authentic because I copied it directly from a real Leica photo before changing the dates and attribution.) The GPS coordinates identify Adobe's headquarters where Narayen works.
    • The cert signer's information looks like a Leica certificate (complete with Leica's correct German address) because I copied the cert's subject information from a real Leica cert.
    • The C2PA checksums and signatures are valid. Programs like c2patool do not report any problems. The only issue (introduced after an update a few days ago) comes from the Content Credentials web site. That site says the cert is from an unknown source. ("Unknown" is not the same as "untrusted".) If the vindictive person wanted to pay $289 for a trusted signing cert, then even that warning could be bypassed. Every C2PA validation tool says the signatures look correct; nothing suspicious. (They were forged using c2patool, so they really are valid.)
    • The picture's timestamps predate the disagreement between Narayen and the vindictive person. (Even if you can show it is forged, you don't know who forged it or when. The vindictive person is effectively anonymous.)
    According to the C2PA metadata, this picture came from Narayen. The picture has valid authentication and provenance information.

    For reasonable doubt, you'll need to show that C2PA's metadata can be easily forged and that the metadata is unreliable. (If Narayen's attorneys are smart, they'll reference the Nov 28 CAI webinar on YouTube (from 16:37 to 17:32) where DataTrails explicitly demonstrates how C2PA is easy to forge with a few simple clicks. DataTrails' solution? They authenticate using non-C2PA technology. This shows that other experts also realize that C2PA doesn't work for validation, tamper detection, establishing provenance, or providing authentication.)

    To prove his innocence, you'll need to prove it is a forgery. (In this case, I intentionally backdated the metadata to a time before Leica supported C2PA. However, if the vindictive person didn't make that kind of error, then there's no easy way to detect the forgery.)

    Worst-Case Results

    Given all of these findings, there are a few other things to consider:
    • Law enforcement, attorneys, and the courts are not very technical. If the file's metadata says it's his name and it has a valid signature, then it's valid until proven otherwise. Remember: digital signatures are legal signatures in a court of law; they are accepted as real until you can prove tampering. (And saying "I didn't do that" doesn't prove tampering.)
    • Narayen can say "that's not my picture!" While Leica may be able to verify that the cert isn't legitimate, Leica is in Germany and Narayen is in the United States. That makes serving a subpoena really difficult. In general, law enforcement (and everyone else who isn't Leica) cannot verify this. Also, Leica is a big company and can have multiple certs. Maybe they just don't remember creating this one or maybe it was part of a limited beta test.
    • C2PA's trust model assumes that a forged certificate will be eventually noticed. However, Narayen doesn't have months or years to wait for the forged certificate to be reused with some other victim. And that's assuming it is reused; it doesn't have to ever be reused.
    • You can try to explain that the cert only validates that the data existed and hasn't been changed since signing. But the courts see the words "authentication" and "provenance" and "certified" and "verified" and "validated". The evidence clearly says it is attributed to Narayen.
    • You might see flaws in the authenticated forgery. The prosecution will claim that the flaws are evidence that Narayen tried and failed to hide his identity. (As many people in the legal system have said, "We don't catch the smart ones.")
    • While the C2PA tools don't identify any problems, you might notice problems if you use other tools. In that case, why even bother with C2PA? But that's a rhetorical question and the answer carries no weight in court. The fact is, the metadata identifies Narayen and the cryptographic signature for authentication and provenance says the metadata can be trusted.
    • Worse: even if the signature appears fake, it doesn't mean Narayen didn't do it. Remember: there is other metadata besides C2PA's metadata that names Narayen. There are also multiple pictures naming him, and not just one photo.
    • This is a very technical topic. Really technical explanations and jargon will quickly confuse the courtroom. Remember: The judge still thinks a FAX machine is a secure data transmission system and most jury members don't know the difference between "Google" and "the Internet". Assuming you can identify how it was forged, communicating it to the judge and jury is an uphill battle. (Creating a forgery in a live demo would be great here. However, while live demos are common in TV crime shows, they almost never happen in real life. Also, live demos can go horribly wrong, as demonstrated by the OJ Simpson trial's "if the glove fits" fiasco. No sane attorney would permit you to perform a live demo.)
    • Even if you, as the expert, can explain how it was forged, your testimony will just appear to be you trying to discredit the certified authenticated provenance information. Remember: C2PA claims to be tamper-evident. But in this case, everything checks out so there is no evidence of tampering.
    • Most people can't afford, don't have access to, and/or don't know an expert witness who can determine whether the C2PA metadata is legitimate. And since Narayen is the CEO of Adobe, using his own employees as experts carries little or no weight. (Any expert from Adobe is clearly biased, since Narayen can fire them at any time.) Law enforcement and the courts will assume: if it says it's from Narayen and the tamper-evident C2PA shows no evidence of tampering, then it's from Narayen.
    • Let's pretend that you can identify the forgery, demonstrate that C2PA does not work, and communicate it clearly to the court. It's now your word against every technical expert at Microsoft, Intel, Sony, Arm, DigiCert, and a dozen other big tech companies. Who has more credibility? Hundreds of highly paid security and cryptography professionals, or you? And remember: these big companies have their reputations on the line. They have a financial incentive to not be proven wrong.
    As an online service provider, I've interacted closely with NCMEC and ICACs for over a decade, and with attorneys and law enforcement for even longer. I can tell you that the prosecution won't spend much effort here. The cops will knock on his door. Narayen won't be able to convince the courts how it happened, and he'll either be found guilty or his attorney will convince him to take a plea deal.

    Narayen's only option in this fictional scenario is to demonstrate that C2PA does not verify the metadata, the metadata can be altered, anyone can sign anything, and C2PA doesn't provide validated provenance and authenticity -- even though it's right there in the name: the "PA" in C2PA stands for Provenance and Authenticity. I'm not a lawyer, but I think this option could also show that every company currently selling products that feature C2PA is actively engaged in fraud, since they know what they are selling doesn't work.

    Either C2PA doesn't work and Narayen walks free, or C2PA works and this forgery sends him to jail. That's the worst-case scenario, and it's very realistic.

    Truth or Consequences

    If this fictional child porn example seems too extreme, then the same application of fake C2PA metadata works with propaganda from wars (Ukraine, Gaza, etc.), insurance fraud, medical fraud, fake passport photos, defective merchandise claims at Amazon and Etsy, altered photojournalism, photo contests, political influences, etc. At FotoForensics, I'm already seeing known fraud groups developing test pictures with C2PA metadata. (If C2PA was more widely adopted, I'm certain that some of these groups would deploy their forgeries right now.)

    To reiterate:
    • Without C2PA: Analysis tools can often identify forgeries, including altered metadata.
    • With C2PA: Identifying forgeries becomes much harder. You have to convince the audience that valid, verifiable, tamper-evident 'authentication and provenance' that uses a cryptographic signature, and was created with the backing of big tech companies like Adobe, Microsoft, Intel, etc., is wrong.
    Rather than eliminating or identifying fraud, C2PA enables a new type of fraud: forgeries that are authenticated by trust and associated with some of the biggest names on the tech landscape.

    A solution for assigning authentication and provenance would go a long way toward mitigating fraud and misrepresentation. Unfortunately, the current C2PA specification is not a viable solution: it fails to authenticate, is trivial to forge, and cannot detect simple intentional tampering methods. It is my sincerest hope that the C2PA and CAI leadership tries to re-engage with the wider tech community and opens discussions for possible options, rather than making sudden unilateral decisions and deploying ad hoc patches (like requiring pay-to-play) that neither address the basic problems nor encourage widespread adoption.

    Tags: #Forensics, #Network, #Security, #FotoForensics, #Programming


      Failures in Automation

      pubsub.slavino.sk / hackerfactor · Monday, 11 December - 22:47 · 12 minutes

    I usually follow a simple mantra: if it can be automated then it should be automated. However, while automation may seem like a great solution, it introduces a new set of perils. Unexpected critical points, untested conditions, and unforeseen failures can lead to new problems.

    At FotoForensics, Hintfo, and my other services, I use a lot of automation. I have tools that detect and respond to network attacks and terms-of-use violations, and that handle server monitoring and basic system management. However, I never just deploy an automation and walk away. I never just assume that it works.

    When I release new automated solutions, I usually spend as much time testing, monitoring, and adjusting the new system as I do developing it. Sometimes new problems are identified quickly and are easy to resolve. Some issues take a while to surface, and some have no easy solutions. A few times, I've had automated scripts fail so badly during the initial testing that they had to go back to the drawing board or were treated as a failure and a good learning experience.

    Unfortunately, my overly-cautious approach to software deployment seems to be the exception and not the norm. Often, companies deploy automated solutions without any deep review.

    What's for dinner?

    At our local grocery store, they removed more than half of the checkout aisles and replaced them with self-checkout kiosks. The space required by three manual checkout counters can fit six kiosks. In addition, one employee can (frantically) manage all six kiosks rather than one employee per counter. When I visit the grocery store, they usually have 10 of the 16 kiosks open with little or no wait. (They open the other six during the busy periods.) And while they still have eight counters available, usually only one or two are open for the people who don't want to use the self-checkout kiosks.

    On the positive side:
    • Automation seems to reduce lines at the grocery store. I rarely have to wait for a kiosk. In contrast, grocery stores that don't have automated kiosks usually have long lines even when the store doesn't seem very crowded.
    • Around here, every store has a "Help wanted" sign. Automation allows the store to operate with fewer employees.
    • I'm not advocating the replacement of employees with machines. However, the existing employees can be moved to other essential tasks, like restocking shelves and helping customers.
    Unfortunately, there is a down side to checkout counter automation:
    • Saving money on employees does not translate into lower prices. Because there is less human oversight, it's easier to steal from these automated kiosks. Why pay extra for that "organic" apple when you can choose the lower-cost generic (inorganic?) apple from the kiosk's menu? Or maybe scan one box of Mac and Cheese but put two in the self-bagging area? I suspect that the increase in theft is part of the reason that prices have increased even though employee-related expenses have decreased.
    • These machines are not always easy to use and customers often need assistance. At my grocery store, self-scanning coupons is usually more miss than hit.
    Some stores are starting to re-evaluate their automated self-checkout systems. A few are even removing them altogether.

    The problem isn't that the kiosks don't work. Rather, while automation solves some problems, it creates opportunities for new problems.

    Neither snow nor rain nor heat nor gloom of night

    The grocery store kiosks usually work well. They are mostly easy to use. (Even with coupon scanning problems and the occasional double-scan, I'd give them very high marks.) However, not all kiosks are designed well.

    For example, every few years, my bank installs a new ATM machine. Sometimes the machine interfaces are really easy to use, while other versions are confusing and take twice as long to use. You can tell when the ATM interface is bad because more people go into the bank for assistance.

    And then there's the US Post Office. It's the holiday season and the Post Office has long lines of people waiting to mail gifts and packages. Our Post Office has a couple of automated self-service kiosk (SSK) machines, but almost nobody is using them. These SSKs have no waiting line and are located right next to the long line of people who are waiting for assistance. (You can't miss these machines!)

    It isn't that the SSKs are down or out-of-order. Rather, the user interfaces are designed so incredibly poorly that people would rather spend 45 minutes waiting in line for human assistance. The few people using these SSKs seem to fall into one of three categories:
    • They spent a lot of time earlier learning how to use them, so they know what selections to use.
    • They think they might battle the confusing interface faster than waiting in the long line. (Sometimes this gamble pays off, sometimes it doesn't.)
    • They have a friend holding their place in line, just in case they can't figure out how to use the SSK. (Seriously. You often hear someone say "can you hold my place in line for a moment?" before trying their luck at an SSK.)
    Then again, if you succeed in using the Post Office's SSK, you might get asked by a few strangers to help them, too.

    Et tu, Etsy?

    Etsy is an online marketplace for handmade goods and is the epitome of automation gone wrong. For example, earlier this year, Etsy rolled out a filter to remove any Etsy listings that also appeared on Temu, the Chinese site known for cheap knock-off goods. The problem is that some Temu accounts appear to steal images and product descriptions from Etsy. As a result, legitimate Etsy sellers are having their handmade products delisted. In some cases, Etsy even shuts down the legitimate Etsy shop. A few examples from the Etsy forums:
    • " Etsy removed my handmade ceramic listing ": A user has repeatedly had their handmade bottles removed. Keep in mind, the bottles look handmade, are signed by the seller, and there are videos showing how she makes the bottles. Another Etsy seller responded:

      I had a listing removed two months ago and after opening a support request with videos showing my process I was able to get it reinstated after two weeks of waiting. However, the exact same listing was removed again last week for the same reason so even if you prove you’re handmaking the item, they may still remove it in the future. I’m hoping if they reinstate a listing it doesn’t still count as a strike against your shop since it seems this will continue to happen (in my experience)


    • " China has stole my product and photos! I can't get Etsy to help. ": A seller laments that their product photos are being stolen by knock-off sellers from China.
    • " Etsy destroyed my store. They are responsible for their own downfall. ": Again, legitimate handmade items were delisted by Etsy because knock-off sellers copied her photos.
    • " I need some help with handmade policy violation - at wits end ": The seller uses an engraver and uv printer and is having custom items removed. One of the replies identified the likely cause: "The bots are taking over, so they may not be sophisticated enough to distinguish between similar silver tone keychains over a pure white background with similar text/sayings, etc."
    A few Etsy users have posted advice for getting products or accounts reinstated, but success with any of these tips is really hit-or-miss.

    Etsy's Amber Alert

    This isn't the only automation and filtering problem at Etsy. Back in 2016, a child died after choking on amber teething beads that had been purchased on Etsy.

    I'm not going to dive into this loss of a child or even the merits of the court case. Rather, my focus is on Etsy's reaction: they banned products that included the word "amber". The initial ban appears to have been automated and does not pay attention to the context. "Amber color?" Nope. "Amber is my first name?" Nope. This forum discussion really describes the problem well (my bold for emphasis):

    Listings deactivated for using the word amber to describe glass bead colour
    by TheBeckoningCat Post Crafter 05-14-2022 10:36 AM

    Received a deactivation notice from Etsy regarding the word "amber" appearing in my description. I cannot get thru to Support by any means. In my description of the bracelet, I mentioned that the Swarovski squares were of an "amber color". Swarovski, to my knowledge, does not offer any crystals made from amber. I have since changed the description to "honey brown color". Perhaps in the future, when your "bot" or algorithm comes up with results for people selling "amber" a human actually review them to determine whether actual amber is being sold. I just did a simple search for the word "amber" and came up with over 132,000 results, just in the US alone. The people who are selling Amber Heard T-shirts will have to revise their listings as Johnny Depp's ex-wife.


    Reply by BlackberryDesigns Conversation Maker 05-14-2022 11:50 AM

    I would get rid of the word Swarovski , also. I got hit on that one from my website. Apparently since making the shift and removing their crystals for sale to the artists, they are doubling down on their brand name and do not want it associated with us little guys.


    Reply by JDTotesnDolls Community Maker 03-14-2022 04:17 PM

    The bot is designed to look for the word amber and not to make decisions on whether or not it is a color or an actual bead.

    Contact Etsy and ask them to manually review the listings and with the removal of the word, they may okay a relist,

    Unfortunately, Etsy's use of the ban-hammer seems inconsistent at best. The Etsy bots appear to use poorly defined rules that are applied irregularly and enforced occasionally, resulting in confusion among their sellers. Adding to the problem, the only real solution is to talk to a human in their support group, and that seems like a Herculean task.

    The problems related to bad bots and automated filtering are more than just an inconvenience for users. Some Etsy sellers have reported that scammers are using Etsy's poor filtering service as a front for blackmailing sellers. For example:

    (I'm the bot in charge of security) HELP
    by KummLeather Inspiration Seeker 09-15-2023 04:49 PM

    hello, I received such a message, does anyone have information about this?

    "Hello, my name is Jackson, I'm the bot in charge of security. You have received a complaint that you are a scammer, in order that your account was not blocked send me your name and surname, as well as your e-mail.
    You will need to be verified today, in the worst case your account will be blocked.
    Send me your first name, last name and e-mail!!!!!"


    Etsy support bot
    by Bellayall Inspiration Seeker 09-15-2023 03:56 PM

    I just received a message from someone claiming to be an Etsy support bot. He said that the product I shared was reported and asked for my name, surname and email address.I'm new to Etsy and didn't know it was a scam so I gave her what he wanted and then he blocked me. Is this a problem please help


    "Is this a problem"? Yes. Yes it is.

    Choke Points

    A heavy dependency on automation also creates central points of failure in critical systems. Earlier this year, a failed update by the FAA caused thousands of flight cancellations across a wide range of airlines. A few months later, Southwest had their own software failure that caused them to cancel thousands of flights.

    Keep in mind, these were not cyber attacks. These were just system failures in critical systems. These single points of failure were associated with the automation of their day-to-day operations. This is why a single computer outage can stop both automated and manual operations.

    Ideally, critical services should have failover backups. Ideally, the operators should have tested for these scenarios and had viable workarounds. However, automation often results in unidentified choke points and a lack of immediate options.

    This limitation is not restricted to critical infrastructures. Our local library has had a few instances where the self-checkout book kiosks have gone offline. One librarian said the kiosk outage was related to their new weekly backup system. Fortunately, they do have a workaround: They put up "out of order" signs and manually handle checkouts at the front desks.

    [Image: analysis.php?id=8a4beb98e7cdba55c314fc9c36f2e621cd2df8e2.114509&fmt=orig]

    Welcoming Our New Robot Overlords

    Unfortunately, I'm seeing more instances of people relying on automation without testing. I'd cite specific instances of people mistakenly relying on ChatGPT, but there are too many references to choose from! Here are a few examples:
    • Legal: An attorney used ChatGPT without checking the results. It made up fake citations. The attorney ended up getting sanctioned by the court.
    • Legal: A different attorney was suspended after using ChatGPT and not noticing that it was citing fake cases.
    • Legal: Another attorney was fired after using ChatGPT.
    • Medical: Researchers found that ChatGPT was usually wrong when responding to a medical question. (It was correct 10 out of 39 times.)
    • Summarization: When asked to generate automated summaries, ChatGPT often makes up results.
    • Factuality: Poynter reported that, when fact-checking results, ChatGPT sometimes reached accurate conclusions, but "struggled to give consistent answers, and sometimes was just plain wrong."
    Just because a solution is deployed doesn't mean it's a perfect solution.

    A few companies have managed to get automation to work quite well. For example, I recently needed to return an item to Amazon. I went down to my local Whole Foods (where Amazon has a kiosk) and was able to effortlessly follow the instructions to return the item. (The hardest part was opening the cellophane bag that they provided.)

    I also drove through Kansas this summer. Part of the trip included a tollway. I had affixed the free tollway pass to my car a month earlier. As a result, I managed to bypass the long line of cars at the tollbooth and I never came to a stop. This was easy and much less expensive than paying at the booth.

    It seems to me that the success of automation really comes down to the company's focus. Companies that are consumer-focused, like Amazon, the Kansas tollway, and the grocery store, have fewer usability problems but introduce new potential avenues for fraud. In contrast, profit- and process-oriented automation, like Etsy's and the Post Office's, seems to generate more problems than it attempts to solve.

    Tags: #Forensics, #Programming, #AI, #Network


      Learning New Tricks

      pubsub.slavino.sk / hackerfactor · Saturday, 25 November - 22:38 · 5 minutes

    I recently presented a short version of my "No-NOC Networking" talk to a room full of managers, C-level executives, policy makers, and technical evangelists. These are not the people who work in a network operations center (NOC), but the audience did include people who manage the people in the NOCs. For this presentation, I lowered the technical level to suit someone who hasn't taken a basic networking class in years. The talk was very positively received.

    Most of the questions were exactly as I expected, like "Where can I get your software?" and "What were those Linux commands to turn off scanner responses?" (The answer to both is at the Nuzzle web site). But there were also a few people who had the same kind of unexpected request: Where can I learn more?

    They weren't asking about learning more ways to deter network attackers. Rather, they were asking about where they could learn more about security in general. A few people confided that they weren't sure if they were too old to learn.

    Full stop: You are never too old to learn about security. Whether you are looking for a late-in-life occupational change or just a new hobby, you can always learn this as a new skill.

    Where to Start?

    I often hear people talking about starting with a certification, like CISSP, CEH (certified ethical hacker), or some of the SANS courses. However, I think that's like diving into the deep end of the pool. (As an aside: this isn't a recommendation for any of these certifications. As someone who tracks down cyber bad guys for a living, I've seen more unethical behavior from people with CEH credentials than from any other group!)

    Before you get too deep or start paying for some kind of education, start by finding out what you like. There isn't just one kind of "computer security". What's your interest? Here's a short sample of different cyber areas:
    • Cryptography (great for math and puzzle people)
    • Network security
    • Policy
    • Reverse-engineering
    • Social engineering
    • Red team (offense)
    • Blue team (defense)
    • Hardware and IoT (internet of things)
    • Software (fuzzing is fun!)
    • Physical security, including lock-picking
    • Anonymity and privacy
    • AI and counter-AI (yes, there's a security element)
    This is far from everything. Most of these categories include forensics (detection) and anti-forensics (avoiding detection).

    Don't know which one to choose? Try them all and find what you like! (Personally, I like the weird stuff, like file format forensics and packet-level network forensics. But I've also worked with everything else in this list.)

    Groupies

    Besides trying to learn on your own, there are tons of conferences. Most weeks have multiple conferences worldwide. While conferences usually require paid admission, a few are free or even online.

    There are also tons of meet-up groups. Many of these groups are part of larger organizations. For example:
    • The Open Worldwide Application Security Project (OWASP) started with a focus on web-security best practices. However, it has evolved. Today, it mostly focuses on policies and processes for securing generalized applications. The organization has individual satellite chapters that hold monthly meetings all over the world.
    • DEF CON is a huge hacker conference that is held once a year. However, it has spawned lots of smaller "DEF CON groups" that meet monthly. Most are identified by their telephone area code. For example, Denver is area code 303, so their local DEF CON group is "DC303". Unlike OWASP, the DC groups usually have topics that span the entire spectrum -- software, hardware, social engineering, etc. Often, they include live demonstrations and how-to information. Some of these groups meet in person, while others are online.

      If you're new to DC groups and you don't like one topic, then wait a minute and there will be a different topic. The only warning: the content can be extremely detailed and discussions may quickly go over your head.
    Most of these groups are friendly, helpful, and welcoming to new people. Also, most of them look down on using technology for evil, malicious, or illegal purposes. You're not going to learn how to compromise your ex's Facebook account or how to steal money from an online transaction.

    Can't find a local group? Try using Meetup. Search for your city and the "Technology" category. In Fort Collins (where I am), we have "Women Who Code", "Fort Collins Internet Professionals" (FCIP), Northern Colorado Hackers (NoCo Hackers), a couple of Python developer groups, web developer groups, and more. And keep in mind, Fort Collins isn't a "big city"; big cities have even more groups. Unless you're out in the middle of nowhere (sorry, Casper, Wyoming), there's probably something nearby.

    Hands On

    Beyond groups, many organizations offer various games where you can try tools, techniques, and methods in a controlled and safe environment. (I liken it to how cats sharpen their claws.) The games often include different skill levels, from newborn novice to guru expert. In my opinion, the real-world problems are nowhere near as difficult as the harder games.

    So how can you find these games?

    If you ask in any of the social groups (OWASP, DC, etc.) then someone is bound to provide some suggestions. But even without group participation, there are lots of 'capture the flag' (CTF) opportunities out there. These include challenges and puzzles that award points for completion. Some are meant for individuals, while others permit teams. (Often, teams are looking for new members. It's usually easy to find a team that will take on a new person.) Some of the better-known CTFs include:
    If you really enjoy these CTF games, then there are competitive teams. Many of the larger conferences have CTF contests with prizes for the winners.

    Personally, I find game play to be a great way to teach and test knowledge. At my FotoForensics service, I include a few 'Challenges' (Tutorials → Training → Challenges) where people can try to evaluate pictures in a controlled environment.

    New Tricks for Old Dogs

    Just as there are different security focuses, there are also different ways to learn. Regardless of whether you prefer self-paced, hands-on, one-on-one, or a classroom environment, there are plenty of options. After you find your interest and get a taste of the technologies, then you can start focusing on formal certifications and professional education... or you can be an informed amateur.

    Most companies, universities, and news outlets focus on cybersecurity as a career. (OMG! 650,000 cyber jobs are now vacant!) However, it doesn't have to be a career. These topics have benefits even in small amounts. With a little practice, you will start noticing fraud and scams, identifying poor security practices, and distinguishing the real threats from hype. The fundamentals used to include reading, writing, and arithmetic. Then they expanded to include some computer literacy. Today, a little computer security knowledge is becoming a fundamental requirement. It's time to start learning!

    Tags: #Forensics, #Programming, #Security, #Conferences, #Network


      Out with the Old

      pubsub.slavino.sk / hackerfactor · Sunday, 12 November, 2023 - 16:31 · 8 minutes

    I often use DVDs for watching videos. To me, the quality is as good as anything streaming. But they have an added benefit: I don't have to deal with commercials or "buffering" issues. My local library has a really good DVD collection. I use the library for those "watch once" movies. When I find a movie I expect to watch repeatedly, I buy it on DVD. This way, I don't have to visit the library the next time I want to watch it. (I'm still on the library's waiting list for the Barbie movie. I'm expecting it to be a fun movie, but one of those "watch once" videos. I suspect it won't be like Firefly or Rogue One, which I bought on DVD. I watch those movies at least once a year.)

    As another benefit, DVDs often include lots of extra features that you won't find on most streaming services. If you want to see all of the funny outtakes from Monsters Inc., or hear the director's commentary about Buffy the Vampire Slayer's "Hush" episode (my personal favorite), then you really need the DVD.

    My DVD player, the one that's hooked up to the TV, broke last week. It wasn't that it couldn't play anything. Rather, the video card in it died. The HDMI connector didn't work, the S-Video didn't work, and the RCA connectors for red, green, and blue only worked for red and green.

    I was really on the fence about whether to get a replacement DVD player or just stream everything from my home media server. (My Synology RAID includes a Plex media server that streams to my Roku.) Among other things, Netflix recently decided to stop their DVD-by-mail rental service. (Before streaming, Netflix began with mail-order DVD rentals.) This was followed weeks later by Best Buy announcing an end to DVD and Blu-ray. Even my newest computers came without DVD players. (Want to install media? Use a USB drive.)

    Just as we all moved from vinyl records to CDs and then MP3s, it looks like the age of the DVD is over. And this is when I decided to replace my DVD player. I guess I bought one of the last dedicated DVD movie players.

    My replacement DVD player is physically smaller than my old one (a fraction of the size) and only cost a few dollars. It supports HDMI and the RCA yellow/red/white connectors. My old DVD player also supported USB and cable TV inputs, and it could record to DVD-RW media. But I hadn't used those features in decades. The replacement is just a DVD player, and that's fine for my needs.

    You'd have to wait but you could hear it on the AM radio

    It's not just DVDs that are going away. A few months ago, it was announced that automakers want to remove AM from their car radios . The technical reason is that electric vehicles generate a lot of radio frequency (RF) noise that interferes with AM radio reception. Shielding the radio from the RF noise would increase the vehicle costs.

    The proponents for keeping AM radio have pretty weak arguments. They point out that it's really easy to set up an AM transmitter and if there's ever a big emergency, then AM radio will work when all else fails. However, if you really want to help in an emergency, then get your amateur radio license. When there are big disasters, like earthquakes, hurricanes, and wars, the ham radio operators are usually the first people to get the word out.

    I'm not sure how I feel about AM radio going away. I own an antique radio (a 1930 Grigsby-Grunow Majestic 131 lowboy). Normally, it only receives two stations: religion and religion+sports. I built a tiny AM radio transmitter that plugs into the headset port on my computer. It's very low power and has a range of a few feet. Using this, I can stream music from my computer to the old radio over an AM signal. However, other than running my very tiny AM station for my antique radio, I haven't used AM in decades. When driving across country, I might scan the FM stations but I never switch to AM.

    Yes, a collect call for Mrs. Floyd from Mister Floyd. Will you accept the charges?

    Another thing that is going away is the landline phone . The plain old telephone service (POTS) is a relic. In 2019, the FCC lifted regulations requiring carriers to provide POTS/landline support. And earlier this year, AT&T (one of the three remaining baby bells ) decided to drop landline support .

    Today, almost everyone uses cellphones. This simplifies connectivity for most carriers and metro areas. In particular, the carriers don't have to run copper wires to every house; they just put up more cell towers. However, if you're in very rural areas (like driving through Idaho, Wyoming, Montana, or the Dakotas), then there are large swaths of land without cell coverage. A landline used to be the only option, but that option is going away.

    Personally, I moved my landline phone number to a mobile service years ago. However, I use a 'base station' to connect to the service. In my office, there's an actual phone with a handset on my desk. The phone plugs into the base station which bridges to the cell service. When an incoming cellular call comes in, the base station makes my phone ring. I do this because I find a real phone handset easier to use than a regular cellphone.

    Of course, there is a downside. The base station can't receive text messages. In fact, none of my phones have text messaging enabled. For me, this is more about costs. For most carriers, text messaging requires a data service, and cellular data services are both slow and expensive. On top of this, there are apps on my phone that cannot be disabled and will happily use any network connectivity. This means that they will run up my data usage even if I don't want them to. Rather than fighting with them, I just don't have a data plan. Unless I'm on my home or office WiFi, my phone can't go online -- and I'm happier this way.

    Can you hear me now?

    Unfortunately, having a phone with a data plan is becoming mandatory.
    • Restaurants have stopped handing out pagers for people waiting to be seated. Instead, they want your cellphone number. This way, they can text you when your table is ready. I've gotten lots of blank stares when I've said, "I don't have a cellphone." (Well, I do, but I don't have text messaging. And even if I did, the paranoid security freak in me doesn't want to give out my number.)
    • One of my webcams is inaccessible from anything except a cellphone. I have no idea why (other than the vendor wanting to track my cellphone usage).
    • Want to buy food or drink on an airplane? Lots of airlines have gone cardless. You register with your cellphone and then purchase airplane snacks with your phone. Of course, I (1) refuse to install their app due to privacy issues, and (2) don't have a data plan for registering the app. My choices are to starve or (more often) carry food onto the plane.
    • Rental car places just assume that you know your car's parking spot because they texted it to you. But if you don't receive text messages, well, hold on while they get a supervisor.
    • My bank recently forcefully enabled two-factor authentication. The good news is that 2FA is more secure. The bad news is that they kept trying to send a text message to my landline (no texts) phone number. When speaking with their tech support, it literally never occurred to them that someone would do online banking without SMS support.

    Home Sweet Home

    It's not just me. One of my friends recently bought a house. This turned out to be much more complicated than he expected:
    • The real estate company was completely confused by the fact that he didn't receive text messages about his closing papers. They sent it to a phone number that doesn't have text messages.
    • Instead of texting, they emailed him links to the ownership documents. Some of their links only worked with Chrome. He almost exclusively uses Firefox.
    • They were surprised that he couldn't just sign the papers on his touch screen or with a mouse. He only has a laptop and it has one of those eraser-nub mice in the middle of the keyboard. No touch screen, no mouse, no trackpad. Remember hearing about the old old days when illiterate people could sign using an "X" ? That's how he bought a house.
    It's not that my friend is super paranoid like me. He's just at that age where he doesn't want to upgrade unless it's absolutely required. Most of the time, upgrading means a learning curve and it's not worth the inconvenience. And in this case, buying a house shouldn't require a new computer plus a new cellphone with a data plan.

    Coming Soon?

    When I check out at the store, the cashiers always ask for an email address or phone number. "Is it required?" "Uh, no." But if I say 'no thank you', then they enter something anyway. (Like the store's phone number?) It may not be required, but they cannot complete the transaction without entering something.

    Of course, all of this makes me wonder about the gap between the haves and have-nots. If you're poor, homeless, or simply can't afford a phone, then you are locked out of lots of things. Without a cellphone, the simple tasks that we take for granted, like using a bank account or buying something from a store, become a serious hardship. Moreover, having a cellphone isn't free. If you're on a poverty-level fixed income, then the phone is often one of the first things to go.

    I'm fine with using new technology for convenience. However, companies need a plan for users who don't have (or don't want) the new technology. There's more of us than you might think.

    Tags: #Politics, #Network, #Unfiction, #Financial, #Privacy


      Motion Tracking

      pubsub.slavino.sk / hackerfactor · Sunday, 29 October, 2023 - 19:50 edit · 9 minutes

    A few blog entries ago, I wrote about problems with helping other people with their computers. (Most of the problems were due to the software and not the people.) This turned into a discussion about automating the monitoring of the elderly and making a minimal " Are you okay? " system. My solution uses a very simple script to monitor their computer for signs of activity since they all use their computers fairly regularly. If they change their habits and miss a check-in window, then it triggers an alert.

    While monitoring the computer is a good start, it's still not ideal. Recently I've been evaluating miniature embedded systems and homemade IoT devices for simple automation tasks. These included Raspberry Pi miniature computers as well as Arduino embedded controllers. In the comments to my blog entry, Matt and Jon suggested that I look at the ESP32. It's the same concept as an Arduino, but it has built-in WiFi and bluetooth. (Now I just needed a reason to try it out.)

    One of my friends has an elderly parent who lives alone. While this person regularly uses the computer, my friend is concerned about his father falling and going unnoticed for hours. We worked out an inexpensive solution: a network-based motion sensor. (Woo hoo! A reason for getting an ESP32 and a guinea pig for testing!) Here's the idea: we will place two of these WiFi motion sensors in my friend's elderly parent's home. They are going to be located near the kitchen and the main hallway. These are areas that he walks by often. As a motion sensor, it will be triggered each time his father goes to the kitchen, walks to the bathroom, TV room, bedroom, etc. ("Or you could just call me. I like it when you call." Nope -- remote wireless sensors!)

    First Attempt

    An Arduino costs about $15 USD but needs a network adapter that costs another $15. In contrast, I purchased my first three ESP32 controllers for about $6 USD each (three CPUs plus development boards for $22 total). Unfortunately, I had to return them because each was faulty. Here's the link to the item at Amazon . However, I do not recommend these. While the development boards are fine, each ESP32 had the same problem: bad ground.

    [Image: the ESP32-WROOM-32 development board with its three ground pins marked in black.]

    The ESP32 (ESP32-WROOM-32, 38-pin) has three ground pins (in black).
    • Good ground . The ground pin in the top right (opposite corner from 5V) is good. Use it for everything.
    • Bad ground . The ground pin on the left (6 pins from the bottom) is bad. With the power off, I hooked a meter from good-ground to bad-ground. It does have connectivity, but there's a slight delay as the resistance drops to zero. This suggests that there's a capacitor or inductor or something sitting in front of the ground pin. When power is first applied, this pin has zero connectivity for a few microseconds.

      I noticed that the device wasn't running the program when power was supplied over USB. I had to press the reset button before the program would run. Other people came up with complicated workarounds (using capacitors and resistors) that effectively press the restart button a moment after power is applied.

      Another person noticed that, if you want to use external power, then you need to use good-ground and not bad-ground. Otherwise, it won't boot. I took his idea and tested it: If you jumper good-ground to bad-ground, then it boots properly when power is applied over USB. As a power source, the integrated USB port appears to be hooked to bad-ground. Don't use bad-ground.
    • Ugly ground . The third ground pin is on the right, 7 pins from the top. According to my meter, this pin has zero connectivity. At all. It's a floating ground pin. At best, it won't work for you. And depending on your electronics, you might end up not having it work, frying something, or causing a fire. Do not use!
    If the developers can't get ground right, then who knows what else is wrong with the hardware. I returned it.

    I want to emphasize that not every ESP32 has this problem. This appears to be a bad batch. (Most likely, someone bought up a bunch of known-bad chips and tried to sell them for cheap on Amazon.) However, this experience is bad enough to scare me away from anything with this form factor. Fortunately, there are lots of other options.

    Attempt #2

    A couple of my friends suggested that I look at the M5Stamp-S3 from a company called M5Stack. This is a postage-stamp size ESP32 controller with built-in WiFi and bluetooth. It also has a built-in multi-color LED. And best of all, the version that I got had all the pins already soldered in, so I can plug-and-play without soldering. (I didn't get it from M5Stack because they use a Chinese credit card processor that has been associated with fraud and is potentially unsafe . Instead, I paid $0.50 more and got it from DigiKey . And since the price for shipping didn't increase with quantity, I ordered three of them.)

    [Image: the M5Stamp-S3 controller.]

    This is a device that I can definitely recommend. It was simple to hook up to the sensor rig that I had originally created for the ESP32. (The pins are in different places, so be sure to redo the wiring.)

    The Sensor

    I configured the M5Stamp as a controller for a wireless motion sensor. For the sensor, I used a tiny microwave doppler radar switch. While it uses microwave frequencies, it's so low power that you won't fry anyone. The microwaves go through wood, drywall, doors, etc. The only things that really stop it are metal and water. Humans are basically large bags of water and create a strong microwave reflection. As you walk past the sensor, it triggers for 2 seconds.

    The thing that I like about this sensor is that you don't have to drill a hole in the wall. Just mount it on the wall near an electrical outlet. It has a range of over 9 feet (3 meters). However, I'm currently seeing brief flashes of activity when there shouldn't be any. (Either there's a ghost in my house that is triggering the motion sensor, or there's a little chatter in the electronics.) I probably just need a little shielding or to filter the chatter in software. But I think this is really close to being ready. (Maybe another 1-2 weeks.)
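
    One possible software filter (a sketch only -- the blog doesn't say how the chatter was eventually handled, and the pin number is a placeholder): require the radar output to stay high for several consecutive reads before counting it as motion. In MicroPython, that might look like this:

      # Hypothetical chatter filter; GPIO 1 is a placeholder for whatever pin
      # the radar module's output is actually wired to.
      from machine import Pin
      import time

      sensor = Pin(1, Pin.IN)

      def motion_detected(samples=5, interval_ms=20):
          # Report motion only if the sensor reads high on every sample.
          # Brief electrical chatter rarely survives five reads spread over 100ms.
          for _ in range(samples):
              if not sensor.value():
                  return False
              time.sleep_ms(interval_ms)
          return True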

    Here's the first attempt at the box:

    [Image: the first prototype of the sensor box.]

    • The orange block is the M5Stamp.
    • The M5Stamp is plugged into an 11x5 mini breadboard.
    • The breadboard is sitting on top of a piece of cardboard with aluminum tape on one side. The aluminum tape blocks any RF noise from bothering the motion sensor and the cardboard prevents the metal tape from shorting out the electronics on the sensor.
    • Under the aluminum and cardboard is the motion sensor. (Just the pins are peeking out the top.)
    • The electronics are mounted in a box. I have a piece of foam (not shown) that keeps everything from moving around. (The box is much larger than needed because it was originally designed for the ESP32-WROOM, which is a much larger controller chip. The next version will be smaller.)
    • The entire thing plugs into a USB cable and power plug.
    When it's done, it will just need to be plugged in and hung on the wall at about ankle level. (You can't see it in this photo, but there's a hook hole for a nail at the top of the box.) It can easily be hidden out of view.

    My program for the M5Stamp does a few things (a rough code sketch follows this list):
    1. It connects to the WiFi network.
    2. It watches the motion sensor.
    3. When there is any activity, the on-board LED turns on. No activity turns the LED off.
    4. It tracks when the sensor was last triggered and holds each motion event open for 20 seconds. This way, small chattering is ignored and the event only ends after at least 20 seconds of inactivity.
    5. At the start of each motion event, it uses the WiFi to contact a reporting web page. It only reports that activity was seen near the sensor.
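
    Here's a rough sketch of that program in MicroPython. The blog doesn't say which firmware or framework was actually used, and the pin numbers, WiFi credentials, and reporting URL below are placeholders, so treat this as an illustration of the five steps rather than the actual code:

      # Illustrative only: pins, credentials, and the URL are assumptions.
      # Assumes a MicroPython build that bundles the urequests module.
      import network, time, urequests
      from machine import Pin
      from neopixel import NeoPixel

      SENSOR_PIN = 1          # GPIO wired to the radar module's output
      LED_PIN    = 21         # on-board RGB LED (check your board's pinout)
      HOLD_SECS  = 20         # chatter within 20 seconds counts as one event
      REPORT_URL = "https://server/motion-report.php"

      sensor = Pin(SENSOR_PIN, Pin.IN)
      led = NeoPixel(Pin(LED_PIN), 1)

      def set_led(on):
          led[0] = (0, 32, 0) if on else (0, 0, 0)
          led.write()

      # 1. Connect to the WiFi network.
      wlan = network.WLAN(network.STA_IF)
      wlan.active(True)
      wlan.connect("my-ssid", "my-password")
      while not wlan.isconnected():
          time.sleep(0.5)

      in_event = False        # inside a motion event?
      last_motion = 0         # time of the most recent trigger

      # 2-5. Watch the sensor, drive the LED, and report the start of each event.
      while True:
          if sensor.value():
              set_led(True)                    # 3. activity -> LED on
              if not in_event:
                  in_event = True
                  try:                         # 5. report only the event start
                      urequests.get(REPORT_URL,
                                    headers={"User-Agent": "kitchen-sensor"}).close()
                  except OSError:
                      pass                     # network hiccup; retry on the next event
              last_motion = time.time()
          else:
              set_led(False)                   # 3. no activity -> LED off
              # 4. the event only ends after 20 seconds without a trigger
              if in_event and time.time() - last_motion > HOLD_SECS:
                  in_event = False
          time.sleep(0.1)

    The report deliberately carries nothing but a device name in the User-Agent, which matches the "only reports that activity was seen" design described above.
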
    Now, if there isn't activity for a few hours, then either (A) the person isn't home (check their iPhone and friend tracker), (B) the person is asleep (check the clock and don't panic at night), or (C) the person has fallen and needs help. Moreover, it's more reliable than waiting for the person to use their computer and trigger any proof-of-life script. It's also non-intrusive; while we know there is movement in the house, we don't know what he's doing or where he's going. (It's not like putting a camera in every room in the house.)

    Other Uses

    Using a wireless motion sensor is a great solution if you don't have pets. However, dogs and cats will trigger the sensor. I mentioned this to a friend of mine and he loved the idea for his elderly dog ! He currently uses multiple PIR (passive infrared) sensors to alert him when the dog walks around at night. (If he doesn't get up when the dog gets up, there's going to be a messy accident.) A single WiFi motion sensor would cover the entire area without being as directional as a PIR.

    Another friend of mine lives with a parent who has dementia. He's planning on tracking every external door of the house, "just in case she makes another escape attempt." (It sounds funny, but it's really a serious problem.)

    Version 2

    After I finish the first version, I'm planning on making a second one for myself. It's going to monitor the front door because UPS, FedEx, Amazon, and the postal service have all decided to never ring the doorbell. They walk up, drop off packages, and run away. I'm usually in the house. If I'm near the door, I might hear the thump of a box and realize there's a package. I do have a camera by the door, but it can take up to 60 seconds before my phone beeps. (The notification is great if I'm next to my phone, but often my phone is in a different part of the house.)

    The next wireless monitor will watch the front of the house. If anyone approaches the doorstep, it will let me know immediately. I'm also going to have a second sensor inside the doorway. This way, someone walking out of the house will trigger the inside sensor first, allowing me to know when someone is leaving and not trigger the doorbell. In contrast, someone walking up to the door will trigger the outside sensor first. And since it's microwave, I don't have to worry about drilling a hole in the wall or mounting something outside in a weather-resistant case.

    With these inexpensive and customized IoT systems, I might end up with a smarter home that works the way I want it to work -- and at a fraction of the price of a mass-produced solution.

    Tags: #Network, #Security, #Programming


      Throwing Shade

      pubsub.slavino.sk / hackerfactor · Sunday, 22 October, 2023 - 02:26 edit · 7 minutes

    As part of FotoForensics, I try to track major occasions, such as holidays, weather warnings, and astronomical events. Often, I'll see fake photos of the occasion before it happens. I might see photos of a major blizzard burying a neighborhood days before the storm hits or a beautiful picture of a full moon a week before the full moon. What I'm usually seeing are forgers creating their pictures before the event happens.

    Similarly, I often see fakes appear shortly after a major event.

    Last Saturday (Oct 14), we had a great solar eclipse pass over North and South America. This was followed by some incredible photos -- some real, some not.

    I tried to capture a photo of the eclipse by holding my special lens filter over my smartphone's camera. Unfortunately, my camera decided to automatically switch into extended shutter mode. As a result, the Sun is completely washed out. However, the bokeh (small reflections made by the lens) clearly show the eclipse.

    [Image: my eclipse photo; the Sun is washed out, but the lens bokeh shows the eclipse.]

    I showed this photo to a friend, and he one-upped me. He had tried the same thing and had a perfect "ring of fire" captured by the camera. Of course, I immediately noticed something odd. I said, "That's not from Fort Collins." I knew this because we were not in the path of totality. He laughed and said he was in New Mexico for the eclipse.

    Ring of Truth

    Following the eclipse, FotoForensics has received many copies of the same viral image depicting the eclipse over a Mayan pyramid. Here's one example:

    [Image: the viral photo of the eclipse over a Mayan pyramid.]

    The first time I saw this, I immediately knew it was fake. Among other things:
    • The text above the picture has erasure marks. These appear as some black marks after the word "Eclipse" and below the word "day". Someone had poorly erased the old text and added new text.
    • The Sun is never that large in the sky.
    • If the Sun is behind the pyramid, then why is the front side lit up? Even the clouds show the sunlight on the wrong side.
    Artists for these kinds of fakes usually start with an existing picture and then alter it. I did a search for the pyramid image but couldn't find it. What I did find were a huge number of viral copies.

    [Image: search results showing the viral copies of the photo.]

    These include sightings from Instagram , LinkedIn , Facebook , TikTok , the service formerly known as Twitter, and many more. Everyone shared the photo, and I could not find anybody who noticed that it was fake.

    Ideally, we'd like to find the source image. This becomes the "smoking gun" piece of evidence that proves this eclipse photo is a fake. However, without that, we can still use logic, reasoning, and other clues to conclusively determine that it is a forgery.

    Looking Closely

    Image forensics isn't just about looking at pixels and metadata. It's also about fact checking. And in this case, the facts don't line up. (The only legitimate "facts" in this instance are that (1) there is a Mayan pyramid at Chichén Itzá in Yucatán, Mexico, and (2) there was an eclipse on Saturday, October 14.)
    • The Moon's orbit around the Earth isn't circular; it's an ellipse. When a full moon happens at perigee (closest to the Earth), it looks larger and we call it a "super-moon". A full moon at apogee (furthest away) is a "mini-moon" because it looks smaller. Similarly, if an eclipse happens when the Moon is really close to the Earth, then the Moon blocks out almost all of the Sun. However, the Oct 14 eclipse happened when the Moon was further away. While the Moon blocked most of the Sun, it did not cover all of the Sun. Real photos of this eclipse show a thick ring of the Sun around the Moon, not the thin ring of the corona that is shown in this forgery. (A quick back-of-the-envelope check of the sizes appears after this list.)
    • I went to the NASA web site , which shows the full path of annularity for this eclipse. That path did go through a small portion of Yucatán, but it did not go through Chichén Itzá . At best, a photo from Chichén Itzá should look more like my photo: a crescent of the eclipse.
    • At Chichén Itzá, the partial eclipse happened at 11:25am - 11:30am (local time), so the Sun should be almost completely overhead. In the forgery, the Sun is at the wrong angle. (See Sky and Telescope's interactive sky chart . Set it for October 14, 2023 at 11:25am, and the coordinates should be 20° 40' N, 88° 34' W.)

      [Image: sky chart for Chichén Itzá on October 14, 2023 at 11:25am.]

    • Google Maps has a great street-level view of the Mayan pyramid. The four sides are not the same. In particular, the steps on the South side are really eroded, but the North side is mostly intact. Given that the steps in the picture are not eroded, I believe this photo is facing South-East (showing the North and West side of the pyramid), but it's the wrong direction for the eclipse. (The eclipse should be due South by direction and very high in the sky.)
    • Google Street View, as well as other recent photos, show a roped off area around the pyramid. (I assume it's to keep tourists from touching it.) The fencing is not present in this photo.
    • The real pyramid at Chichén Itzá has a rectangular structure at the top. Three of the sides have one doorway each, while the North-facing side has three doorways (a big opening with two columns). In this forgery, we know it's not showing the South face because both stairways are intact. (As I mentioned, the South-facing stairwell is eroded.) However, the North face should have three doorways at the top. The visible sides in the photo have one doorway each, meaning that it can't be showing the North face. If it isn't showing the North side and isn't showing the South side, then it's not the correct building.
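
    For the curious, the size mismatch in the first bullet is easy to sanity-check. Using round figures (mine, not the blog's), the angular diameter of a sphere is roughly 2·arctan(radius/distance); the Moon near apogee comes out slightly smaller than the Sun, which is exactly why this eclipse left a bright ring instead of going dark:

      import math

      def angular_diameter_deg(radius_km, distance_km):
          return math.degrees(2 * math.atan(radius_km / distance_km))

      # Round figures: Sun radius ~696,000 km at ~149.6 million km;
      # Moon radius ~1,737 km at ~405,500 km near apogee.
      sun  = angular_diameter_deg(696_000, 149_600_000)   # about 0.53 degrees
      moon = angular_diameter_deg(1_737, 405_500)          # about 0.49 degrees
      print(f"Sun {sun:.2f} deg, Moon {moon:.2f} deg")      # Moon < Sun -> annular ring
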
    There was one other oddity in this fake eclipse photo: the people. The forgery photo shows a lot of people. However, you can't make out any details about them, except that they are all dressed in dark clothing and nobody is standing on the lawn. If you ever see a real photo of tourists, you'll notice that there are lots of different colors of clothing. And a crowd of people at a major event like this? People will definitely be standing on the lawn. In addition, there are no telescopes or cameras. (If the people are there for the eclipse, then why are they not watching the eclipse?)

    I can't rule out that the entire image may be computer generated or from some video game that I don't recognize. However, it could also be a photo from something like a museum diorama depicting what the pyramid may have looked like over a thousand years ago. (Those museum dioramas almost never have people standing on the miniature lawns.)

    In any case, the eclipse was likely added after the pyramid photo was created.

    Moon Shot

    While I couldn't find the basis for this specific eclipse photo, I did see what people claim is a second photo of this same eclipse at the same Mayan pyramid. I found this version of it at Facebook , but it's also being virally spread across many different social media platforms.

    [Image: the second fake eclipse photo, as shared on Facebook.]

    Now keep in mind, I've already debunked the size of the Sun, the totality of the eclipse, and the angle above the horizon. This picture also has the same problem with the wrong side of the pyramid being in shadow. Moreover, it contradicts the previous forgery: it shows the eclipse happening on the other side of the pyramid, no people, and different cloud coverage at the same time on the same day.

    With this second forgery, I was able to find the source image. The smoking gun comes from a desktop wallpaper background that has been available since at least 2009:

    [Image: the desktop wallpaper that served as the source image.]

    In this case, someone started with the old desktop wallpaper image, gave it a red tint, added clouds, and inserted a fake solar eclipse.

    Total Eclipse of the Art

    It's easy enough to say "it's fake" and to back it up with a single claim (e.g., wrong shadows). However, if this were a court case or a legal claim, you'd want to list as many issues as possible. A single claim could be contested, but a variety of provable inconsistencies undermines any authenticity allegedly depicted by the photo.

    The same skills needed to track down forgeries like this are used for debunking fake news, identifying photo authenticity, and validating any kind of photographic claim. Critical thinking is essential when evaluating evidence. The outlandish claims around a photo should be grounded in reality and not eclipse the facts.

    Tags: #FotoForensics, #Network, #Forensics


      Tracking Proof of Life

      pubsub.slavino.sk / hackerfactor · Saturday, 14 October, 2023 - 16:14 edit · 10 minutes

    In my last blog entry , I mentioned helping other people with their computers. While I didn't mention anyone's age, a lot of the feedback has been related to elderly relatives. (Only some of the people I tried to help were elderly.) These comments led to some very interesting discussions about monitoring the elderly.

    Whether it's your parents, grandparents, distant relatives, or nearby neighbors, we all know someone who is elderly. Unless you live with them, you probably check up on them during the occasional visits, while on walks around the block, or through phone calls, emails, texting, and social media apps. I know many people (myself included) who have scheduled weekly calls to check in and chat. The problem is, if we don't hear from them, we just assume they are busy. We usually don't get concerned until after days or weeks pass.

    My biggest fear is learning that someone had an accident, like falling, and went unnoticed for days. I remember reading an article about a man who died and nobody noticed for 7 years. That's when his auto-pay bank account ran out of funds for utilities. Another deceased person went unnoticed for 8 years .

    I've been chatting with friends about different kinds of "proof of life" monitoring. Personally, I don't want to install a camera in every room of someone's house. That's too invasive. But at the same time, friends and family should know that the person is moving around and doing the expected day-to-day things. An alert should be triggered whenever the daily routine is disrupted. My friends and I have come up with a few solutions.

    Solution #1: Panic Button

    Life Alert, LifeCall, Lifeline, and invisaWear are wearable devices with a button you can press if you have an emergency. However, they are large, bulky, and unflattering to wear. The button also doesn't work if you can't reach it or are unconscious.

    The TV commercials for these devices always show a happy elderly person receiving the device, and then using it while in distress. I'll tell you from first-hand experience: I'd rather have dental surgery than try to convince an elderly person to carry the device around every day, "just in case they need it."

    A few decades ago, these panic buttons were good solutions. But there are much better options today.

    Solution #2: Apple Watch

    The Apple Watch includes a fall detector. If you fall, it sounds an alert. If you don't stop the alert, then it uses the built-in cellphone to call for help.

    The Apple Watch is a great (but expensive) out-of-the-box solution. No technical programming needed and it's easy enough that even my non-techie elderly friends can use it. As an emergency monitoring system, this is a bare-minimum solution. However, it has some serious limitations. For example:
    • It only detects hard falls. If you hit a sofa, slide down a wall, or partially catch yourself, then it won't detect the fall.
    • If you fall and land on the watch, it could be damaged. (For example, falling onto hard concrete can break the watch.) A broken Apple Watch won't call for help.
    • If you're not wearing the watch, then it won't detect the fall. This includes falling while getting out of bed, slipping in the shower, or not wearing the watch while it is charging.
    • Speaking of charging... The watch needs to be regularly recharged. If you forget and it loses power, then it won't help you.
    • The watch needs cellular connectivity to call for help. One of my friends has a cellular blind spot in the kitchen, between the refrigerator and the oven. (If you want good reception, move away from the kitchen.)
    Let's say it's a soft fall and you are conscious. You can always use the watch's built-in phone to dial for help, right? Well, not necessarily. What if you broke one or both arms when you hit the ground? Then you can't easily touch or navigate the watch's interface. If you have "Hey Siri" enabled, it may not register your voice when it is under a lot of stress. (It doesn't recognize screaming.)

    The worst case? A soft fall as you lose consciousness. The watch is no help here.

    Solution #3: Daily Pattern Monitoring

    Rather than watching for a life-impacting event, I've been thinking about detecting the absence of an event. For example, when I travel, I always have my laptop with me. If I get to the hotel too late at night, I might not call home (I don't want to wake anyone up). However, I am guaranteed to check my email when I get to the hotel. I've recently configured my laptop to trigger a "proof of life" URL every time it goes online. This way, my friends and family who want to know if I made it safely can always check to see if my laptop was turned on.

    Similarly, I have a couple of elderly friends who always check their computers. I've recently modified their Windows configuration to trigger a proof-of-life URL anytime someone logs in to the computer (including when the screensaver is unlocked). If they don't use their computer at least once a day, then it will trigger an alert.

    To do this, I just needed to create three things: (1) a VisualBasic script to trigger the proof-of-life URL, (2) a task scheduler event that runs the script when there is a login event, and (3) a receiving service that looks for missed events.

    Keep in mind, these are the steps I used. I wouldn't be surprised if there was a better or easier option.
    1. Open the Command Prompt. This will default to your home directory (C:\users\ name \). Create a directory called 'Scripts' for holding the script to call when the event is triggered. ( mkdir Scripts )
    2. Create the script. I used 'Notepad' to create a simple visual basic script. This script calls 'curl' to trigger a URL that will record the activity. I saved the script as \users\ name \Scripts\Proof-Of-Life.vbs (the suffix ".vbs" is important.) Here's the source code:
      ' Silently call the tracking URL; the -A flag sets the User-Agent so the
      ' server knows which machine is checking in.
      Set oShell = CreateObject("Wscript.Shell")
      Dim strArgs
      strArgs = "cmd /c curl -A Eddie+Desktop https://server/life-track.php"
      ' 0 = no visible window, False = don't wait for curl to finish
      oShell.Run strArgs, 0, False
      Change the URL to point to your own web server, and set the user-agent string (-A) to identify which computer is doing the reporting. The script's 'run' parameters will trigger the tracking URL silently (without having a terminal window briefly popup). In effect, the user won't notice that this happened.

      To verify that you did this part correctly, you should be able to open the File Folder, navigate to \users\ name \Scripts\, and run the Proof-Of-Life.vbs file by double-clicking on it (or right-click and select "Open" from the menu). If everything works correctly, nothing will appear to happen on the desktop. However, the remote web server will see a request for "/life-track.php" coming from the computer.
    3. Open the Windows 'Task Scheduler'. (Go to the Start menu and just type 'Task Scheduler'. It will appear at the top of the list.) This is where things get complicated. For tracking logins, you will select "Create Task" and then fill in the tabs:

      • General . Give this task a name and description. I called mine "Proof of Life Login". (Caveat: You can't change the name. To change it, delete the task and recreate it with the new name.) Select "Run only when user is logged on". You don't need to change any of the other default values.
      • Triggers . Select "New" to create a new trigger. From the top drop-down menu for "Begin the task", select "On workstation unlock". I also configured it to "Stop task if it runs longer than" 30 minutes. (It should only take a second to run.)
      • Actions . Select "New" and "Start a program". Select your "C:\users\ name \Scripts\Proof-Of-Life.vbs" program.
      • Conditions . I uncheck the Power settings since I want it to run regardless of whether it's on battery or AC. I also selected the Network condition with "Any connection".
      • Settings . This vbs program should take a second to run. For "Stop the task if it runs longer than", select the minimum time: 1 hour.
    With all of these entered, click "OK". Now you should be able to activate the screensaver (Win-L to lock). When you unlock the screensaver, it will immediately trigger the tracking URL.

    On the server side, I created a life-track.php script that (1) validates the user based on the User-Agent string value, and (2) logs the information in an SQLite database. I record the user, date and time, and IP address. I also have a cronjob that checks the SQLite database every few hours to make sure that the user triggered the URL. If no user was seen, then it sends an alert email to me. (My first response will be to check if they are supposed to be home. Then call them, and if that fails, then issue a welfare check on them.)
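
    The receiving side described above is a PHP page plus a cron job; I don't have that code, so here is a rough Python equivalent of the same idea. The allowed device names, database path, port, and email addresses are all placeholders:

      # Sketch of a proof-of-life tracker: log check-ins, alert on silence.
      # Device names, paths, port, and addresses below are placeholders.
      import sqlite3, smtplib, time
      from email.message import EmailMessage
      from http.server import BaseHTTPRequestHandler, HTTPServer

      DB = "/var/local/life-track.db"
      ALLOWED = {"Eddie+Desktop", "Eddie+Laptop"}     # known User-Agent values
      MAX_SILENCE = 24 * 3600                         # alert after a quiet day

      def init_db():
          with sqlite3.connect(DB) as db:
              db.execute("CREATE TABLE IF NOT EXISTS checkins "
                         "(device TEXT, seen INTEGER, ip TEXT)")

      class Tracker(BaseHTTPRequestHandler):
          def do_GET(self):
              device = self.headers.get("User-Agent", "")
              if self.path == "/life-track.php" and device in ALLOWED:
                  with sqlite3.connect(DB) as db:
                      db.execute("INSERT INTO checkins VALUES (?, ?, ?)",
                                 (device, int(time.time()), self.client_address[0]))
                  self.send_response(204)
              else:
                  self.send_response(404)
              self.end_headers()

      def check_for_silence():
          # Run this from cron every few hours.
          with sqlite3.connect(DB) as db:
              for device in ALLOWED:
                  row = db.execute("SELECT MAX(seen) FROM checkins WHERE device=?",
                                   (device,)).fetchone()
                  last = row[0] or 0
                  if time.time() - last > MAX_SILENCE:
                      msg = EmailMessage()
                      msg["Subject"] = "No proof-of-life from " + device
                      msg["From"] = "tracker@example.com"
                      msg["To"] = "me@example.com"
                      msg.set_content("Last check-in: " + time.ctime(last))
                      with smtplib.SMTP("localhost") as s:
                          s.send_message(msg)

      if __name__ == "__main__":
          init_db()
          HTTPServer(("0.0.0.0", 8080), Tracker).serve_forever()

    The only shared state is the SQLite file, so the same database feeds both halves: the web handler writes check-ins and the cron-driven check reads them.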

    As far as privacy goes, the only people who know about the data are myself and the person who said I could monitor them. The monitoring is also not intrusive: I don't know what they are doing at the computer, or even how long they are on the computer. I only know when a proof-of-life was last observed. If the person is injured or missing, then I'll know within a few hours. Worst case, they will be on the floor for up to 16 hours (night check through next morning check), but that's much better than having nobody know there's a problem.

    Alternate Use

    For my laptop, I have a similar Windows Task called "phone home". It runs automatically whenever the laptop connects to any network. For this script, I used a custom event trigger:
    • Begin the task: On an event
    • Log: Microsoft-Windows-NetworkProfile/Operational
    • Source: (leave blank)
    • Event ID: 10000 (that's when the network is up)
    When I get to the hotel, I use my laptop to connect to the WiFi, and it automatically triggers a proof-of-life.

    (This has the added benefit of tracking my laptop if it is ever stolen. If the thief turns it on and connects to any wireless network, it will immediately and silently call home.)

    Configuring Windows Tasks is not intuitive. There are tons of event names and numeric identifiers, and very little documentation. A good start is to look in the "Event Viewer". Every event is logged and lists both the log file and the numeric code.

    More Options

    Looking for a change in the daily pattern of life is a great option for monitoring someone's welfare. If they end up falling or being incapacitated, then they may be down or hurt for a few hours, but it won't be for days or weeks before someone notices.

    Besides tracking login access and network connectivity, there are other great uses for these types of monitors. For example, I have one Windows computer that I only use with one client. I can use these triggers to monitor both start and stop times, allowing me to automatically track my billable hours. (Why estimate to the nearest 15 minutes when I can see the exact times that the computer was in use?)
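
    As a sketch of that idea (not from the blog): if the tracker also records whether each ping was a "start" or a "stop", then a day's billable time is just the sum of the start-to-stop gaps. The table and column names here are assumptions:

      import sqlite3

      def billable_seconds(db_path, device, day):
          # Sum the start->stop gaps for one device on one YYYY-MM-DD day.
          with sqlite3.connect(db_path) as db:
              rows = db.execute(
                  "SELECT event, seen FROM checkins WHERE device=? "
                  "AND date(seen, 'unixepoch', 'localtime')=? ORDER BY seen",
                  (device, day)).fetchall()
          total, started = 0, None
          for event, seen in rows:
              if event == "start":
                  started = seen
              elif event == "stop" and started is not None:
                  total += seen - started
                  started = None
          return total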

    The tracking doesn't even need to be a global system event. I showed this to one of my coworkers and they immediately wrapped their social media apps with a tracker. It contacts the tracking server each time the app starts and stops. Now they know exactly how much time they are wasting online.

    The tracking URLs don't even need to be accessible over the internet. I could have my script contact a local embedded device, like a Raspberry Pi, Arduino, or ESP32, that runs a simple web server. The micro computer can then trigger some event or activity. Personally, I might make one that beeps every hour, so I remember to get up and move around a little. (It's not healthy to sit in one place for hours.) Or maybe have it automatically adjust the room lighting and temperature when it sees that I'm working.
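
    Here's a minimal sketch of that local-device idea (an illustration with a placeholder pin and port, not code from the blog): a MicroPython board can answer the tracking URL itself and toggle whatever it is wired to, such as a buzzer or a lamp relay:

      # Tiny HTTP listener for a local "tracking URL" (MicroPython).
      # GPIO 2 and port 80 are placeholders; assumes the board has already
      # joined the WiFi network (see the earlier sensor sketch).
      import socket
      from machine import Pin

      output = Pin(2, Pin.OUT)

      srv = socket.socket()
      srv.bind(("0.0.0.0", 80))
      srv.listen(1)

      while True:
          client, addr = srv.accept()
          client.recv(512)                        # read and discard the HTTP request
          output.value(not output.value())        # toggle whatever is attached
          client.send(b"HTTP/1.0 204 No Content\r\n\r\n")
          client.close()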

    The Dark Side of Tracking

    While these technologies can be used to track the welfare of elderly friends, they can also be abused. A stalker with access to your computer can use this technique to monitor when you are at the computer. Employers could use them to determine when you are not working.

    Fortunately, you can use the Task Scheduler to see what other tasks are currently on the system. If you see an unexpected task, you can easily disable or delete it.

    On my own systems, I noticed that Google and Microsoft both added event tasks to check for updates. I modified those so that they only run on my home network. (Woo hoo! No more "auto update" while giving a presentation at a conference!) Personally, I don't care how high the risk is that the patch wants to fix; I don't want updates when I'm traveling. When I'm on the road, the risk from a malicious or failed update is almost always worse than the problem being patched.

    Tags: #Privacy, #Network, #Programming