
Age Verification Providers Say Don’t Worry About California Design Code; You’ll Just Have To Scan Your Face For Every Website You Visit


from the this-is-not-a-solution dept

If you thought cookie pop-ups were an annoying nuisance, just wait until you have to scan your face for a third party to “verify your age” after California’s new design code takes effect.

On Friday, I wrote about the businesses and organizations most likely to benefit from California’s AB 2273, the Age Appropriate Design Code bill that the California legislature seems eager to pass (and which supporters call the “Kids’ Code” even though the details show it will have an impact on everyone, not just children). The bill had gotten very little attention, but after a few of my posts started going viral, supporters of the bill escalated their smear campaigns and lies – including telling me that I’m not covered (and when I dug in and pointed out how I am… they stopped responding). But, even if Techdirt isn’t covered in one way or another (which, frankly, would be a relief), I can still be very concerned about the impact it will have on everyone.

But, craziest of all, the Age Verification Providers Association has decided to come forward in the comments to defend itself and insist that its members can perform age verification in a privacy-protective way. You just have to let them scan your face with facial recognition technology.

Really.

I am not joking:

First, we want to reassure you, and your readers more generally, about anonymity. The goal of the online age verification industry is to allow users to prove their age to a website WITHOUT disclosing their identity.

This can be achieved in several ways, but mainly by using independent third-party AV providers who do not store any of your personal data centrally. Once they have established your age or age range, they do not need (and, under European GDPR law, therefore have no legal basis) to hold your personal data.

In fact, the AV provider may not need access to your personal data at all. Age estimation based on facial analysis, for example, could take place on your own device, as could the reading and validation of your physical ID card.

First of all, I want to point out that they said the provider “may not” need access to your personal data. Which is very different from “doesn’t” or “won’t.”

Plus, they insist it’s not “facial recognition” software because it doesn’t match you to a database of your identity… it just uses “AI” to estimate your age. What could go wrong?

But, more to the point, they’re basically saying “don’t worry, you’ll just need to scan your face or your ID card for every website you visit.” Normalizing facial scans doesn’t seem particularly privacy-protective or reasonable. It sounds pretty dystopian, frankly.

We just went through this nonsense earlier this year when the IRS demanded facial scans, and it later emerged that – contrary to claims about the privacy and high quality of the facial verification technology – the technology was not very reliable, and the provider in question’s public claims about the privacy of its tools were false.

Honestly, this is all weird. The idea that we need face scanners to surf the internet is just crazy, and I don’t see how that benefits kids at all. (Also, does that mean you can now only surf the web on PCs with webcams? Do public libraries and internet cafes have to equip every machine with a camera?)

This morning, they’re back in the comments again, trying (and failing) to argue that there’s nothing to worry about. When people point out that such a system can be manipulated, they have a response… “we’re just going to have you take a video of yourself saying some phrases too.” I mean, what?

For some higher-risk use cases, age verification may involve a liveness test, where the user must take multiple selfie photos or record a short video saying phrases requested by the vendor. Passive liveness technology has further reduced the effort this requires from the user – do check it out.

They also push back on the idea that you’d have to be scanned all the time. If you’re “low risk,” they say, you may only need to have your face scanned every three months. What a bargain.

How often you have to prove that it’s still the same user who performed the verification is up to the services themselves and their regulators. Some low-risk uses may only re-verify every three months – higher-risk situations may check that it’s still you every time you make a purchase.

Also, they say that if Techdirt is going to publish content “potentially harmful to children” (as we’ve described, the “harmful to children” standard is never clearly defined in the bill and could easily apply to our stories about civil rights abuses, etc.), these age verification providers have a solution: just redesign Techdirt to put those stories in an “adult section.”

Unless Techdirt serves content that is potentially harmful to children, there is no need to apply age assurance at all. If some content is potentially harmful, it could be placed in a subsection of the site, where adult users who want to access it would complete an age check – but probably the same one they did three weeks earlier when downloading a new 18-rated video game.

All of this is absurd.

Again, everything about this bill assumes that anyone providing an internet service is up to no good, and that every child who uses the internet is being harmed. That’s not even remotely true. There are ways to address the real problems without ruining the internet for everyone. But that’s not the approach California is taking.

Filed Under: ab 2273, age appropriate design code, age verification, california, face recognition, face scans, verification

Companies: age verification providers association
