US States Are Pushing For Device-Level Age Verification, But It Doesn't Solve The Problem
Over the past couple of years, we have seen an increased push for age verification across a number of applications and online platforms. The push has been so concentrated in some states that thousands of users in the United States have flocked to VPN services to avoid providing verification to access online content. And while there are certainly some arguments for requiring age verification when it comes to protecting teens and kids from explicit online content, there are also a lot of pitfalls in how companies have handled this data in the past.
The biggest problem with age verification services is how the private data required to prove your age might be handled. Most recently, we saw this play out in the community's concerns over Discord's push to require age verification on all accounts, though the company has since walked back its plans to force a global rollout of that system in the immediate future. And while that is good news for Discord users, other platforms continue to push age verification, like Spotify, which could potentially deactivate accounts it deems not old enough. Even Google is using AI to estimate people's ages. Thankfully, a couple of states have started to look for other ways to approach this issue, going beyond simply requiring apps themselves to verify age. Instead, California and now Colorado want to verify a user's age at the device level.
Sounds like a win, doesn't it? In some regards, it absolutely is. However, it still doesn't address the elephant in the room: having to give away your personal information just to access content online.
The real problem with age verification systems
The actual problem with age verification systems isn't just the complaint that "you're limiting our freedom to access content online." Some people certainly hold that view. However, for most of us, myself included, the biggest issue with age verification systems is that they often require handing over personally identifiable information to prove how old you are. That could mean credit card information, a facial scan, or even a picture of your driver's license or ID.
Furthermore, many of the systems coming out now are designed around AI doing the heavy lifting. This is especially prevalent in systems like Discord's, where the company plans to use AI-powered facial scans to verify users' ages. Handing that much information to an AI system is a scary prospect for many, especially as we are already seeing loads of deepfakes used in AI scams. As such, it's easy to worry that a scan of your face could end up being used for more than just verification. That becomes an even riskier proposition if the company hasn't done anything to earn your trust in how it handles your private data.
This is a big part of what led to Discord's decision to slow its global rollout of age verification, as members of the community came together to lash out against the change and how a partner of Discord's had handled private data in the past.
There's more at stake than just personal information
While personal data and the safety of that information are among the most pressing problems, there is also a slew of other possible dangers associated with age verification. Some issues are as simple as the fact that not every user has an ID. There's also the fact that reports suggest higher error rates in communities of color, along with concerns for other groups of users, such as members of the LGBTQ+ community, who rely on the internet for important care and connection with others.
There are also concerns that forcing age verification could completely nullify online anonymity, which has always been one of the internet's biggest pros and cons. Furthermore, some believe that requiring teenagers and children to obtain parental consent before accessing certain types of content might completely cut them off from care they need, such as mental health assistance. It could also cut off homeschooled children from up-to-date and important information about the world around them, as well as from their schooling materials. The big concern here is that all the content teenagers access would be filtered through what a parent deems "okay." For younger children, that can be a good thing, but some worry it could prove harmful for older teens, depending on the situation.
Is on-device verification the answer?
When you look at the whole picture, on-device verification does sound like a nice solution. However, it still has some possible issues. For starters, it doesn't completely do away with the need to hand over personal information, as you will still need to verify your identity on the device. Additionally, trusting your device's manufacturer with that information might be a grey area for some. While some companies have become well-known for their approach to user privacy, new exploits and malware aimed at user devices pop up constantly.
So, while on-device verification would limit who you have to share your information with, we would still need to know how other applications use the information the device passes along. Device manufacturers would also need to create a secure place on the device for personally identifiable information to be stored, or better yet, not store it at all beyond the initial verification process.
There is, ultimately, a lot of potential for on-device verification to work. But at the end of the day, it still requires users to trust a company with their personal information. With all the new AI features some of these companies are building into their devices, that might be a harder ask for some than for others.