Google's Moral Responsibility

Earlier in the month, I attended (virtually) Ad World, a conference about advertising. Yeah, I know, not the most riveting sounding thing, but it had some topics (notably Web3) that I wanted to hear more about. And it’s a conference, and I love me a conference.

I love conferences for one simple reason: it’s a chance to think differently (and no, not in the Apple way). One thing I’ve noticed is that, over time, one gets set in a particular mindset. In effect, it’s habit: you do things a certain way because they work. It’s not laziness per se, it’s basic efficiency. Conferences are a bit of a slap to the head, a way of hearing other points of view that you might not have openly considered mostly because you weren’t looking for them. I’ve learned a great many things at a conference, merely by listening to someone with a different perspective.

I was listening to one such presentation, from a particularly enthusiastic gentleman who both loved and hated Google (which, honestly, I can relate to), and who was trying to prepare us for the new world of Google Ads where everything is done through AI. His introduction underlined that point, and tried to make it really clear that if you use Google, Google knows everything about you.

Of course, Google doesn’t openly talk about this. But the fact remains: they read the emails from your mother, they know when you search for a plumber, they know your spending habits (even if they don’t have access to your bank account), they know where you go (including your commuting habits), where you’ve traveled, your political alignment, and where you live.

Google, a publicly-traded private corporation, knows more about you than all the spy agencies on the planet, regardless of the details made public by Julian Assange or Edward Snowden.

I know, this has been talked about before, right? Google doesn’t do any evil, right? (Actually, no. They quietly buried that line in their code of conduct years ago.) The simple fact is: Google is the single largest non-governmental database of personal information on the planet, by a sizeable margin. Facebook? Ha, no, please, give me a break. They’ve got a lot of detailed information, yes, but it’s only about those who have signed up, plus a few extras. Have you run a single Google search? That’s a nascent profile. A few searches? That’s them starting to know your habits. Gmail account? Might as well give them your DNA profile at this point.

Okay, let’s move a bit past the scare tactic, into the really scary shit. ‘Cuz if you think they know a lot about you, imagine the things that they’re not saying.

Like, say, do they know you’re a terrorist? Y’ain’t learning how to make a bomb from a book in the library these days. And have you started searching for things like flight plans? Airport layouts? Or perhaps London Tube maps? Or perhaps you just really love guns, are part of a local militia, and have a particular leaning towards the right side of the fence. Odds are, there are profiles in Google’s system that should be red-flagged because they scream “dangerous person”.

And yet, school shootings continue. Crippling attacks on infrastructure. There’s no way Google couldn’t have foreseen these events, given the data they have and models they can create. All assuming, of course, that these people – if we can call them human – aren’t going to extreme lengths to cover their tracks, which a smart person likely would … but these creatures aren’t what we would call “intelligent”, so anything is really possible.

Beyond deliberate harm from the stupid side, we can also look to health concerns. You know that Google knows exactly where flu outbreaks are, right? The moment someone starts searching for symptoms, they know. They knew where all the anti-vaxxers were during COVID, the people who believed bleach and ivermectin would spare them from a government-tracking, 5G-riddled injection (I still don’t have my super-speed after three shots, by the way, and I’m very disappointed). Google knows where an E. coli outbreak from contaminated romaine lettuce has occurred faster than the FDA ever will.
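To make that concrete, here’s a toy sketch of the kind of signal I mean. This is nothing like whatever Google actually runs internally; it just illustrates how easily a regional spike in symptom searches stands out against its own baseline. The numbers, region names, and the flag_outbreaks helper are all invented for the example.

```python
# Toy outbreak-spotting sketch: flag regions whose symptom-search volume
# this week sits far above their own historical baseline.
# All data and thresholds are made up for illustration.
from statistics import mean, stdev

# Weekly counts of searches like "fever chills aches", per region (fictional).
history = {
    "Calgary":  [120, 130, 118, 125, 122, 128, 119, 124],
    "Toronto":  [310, 305, 298, 320, 315, 300, 308, 312],
    "Winnipeg": [80, 78, 85, 82, 79, 81, 84, 80],
}
this_week = {"Calgary": 127, "Toronto": 540, "Winnipeg": 83}

def flag_outbreaks(history, current, z_threshold=3.0):
    """Return (region, z-score) pairs where the current count is an outlier."""
    flagged = []
    for region, counts in history.items():
        mu, sigma = mean(counts), stdev(counts)
        z = (current[region] - mu) / sigma if sigma else 0.0
        if z >= z_threshold:
            flagged.append((region, round(z, 1)))
    return flagged

print(flag_outbreaks(history, this_week))
# Only Toronto gets flagged with these made-up numbers.
```

A dead-simple z-score against each region’s own history is all it takes here; a company sitting on the real query streams could obviously do far better.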

Google probably knows where a tornado will touch down and how long it will last. Reed Timmer would kill to have that data.

So where is Google’s moral responsibility in all of this? With the information that they have, with the projections that they can make, where are they in warning the public? It’s not their obligation, to be sure: there’s no law anywhere that would force them to ante up with that kind of a warning. There’s no early warning, no siren, nothing that has them speaking up and giving people a heads-up.

And here’s where we’ve ended up with our supposedly modern society: awash in data that can feed better, more accurate modelling of an increasingly erratic and unpredictable population, if only someone would spare us from the uncertainty. Nearly all of us rely on Google already knowing the damn answer before we ask it, so why aren’t we demanding Google tell us the risk before we know it’s there?

Okay, now for the super-scary part. It’s a counterpoint to the above, and the reason I would never, ever want Google to actually speak up.

Google knows when you’re pregnant. (Well, for half of the population, anyway.)

Invasion of privacy, yes. But it can be far, far more sinister than that. I draw your attention to the recent “leaked” draft reversal of Roe v. Wade in the United States. Now, let’s consider the possibility that the government wants to know about each and every pregnancy. And where those people are. Where they’re going. Are they heading towards known abortion clinics? Heading to the Canadian border? Trying to cross into Mexico?

Would Google’s “moral responsibility” follow the imposed law of the land, or would it follow the human rights standards written by other organizations? Would they tattle on the vulnerable to save their own skins, or challenge the law in court?

We’ve given up too much, already. We made the mistake 20 years ago of saying: “We have nothing to hide.” I know I did. Google knows everything about me, no matter what I do at this point to try to undo it. There is no “delete” button big enough to erase me from history. I might be okay with that.

But I’m not okay with it for everyone else. Do we make it a requirement that all Google users are notified of concerns or risks? Do we help the world be aware, or do we leave this in the hands of governments that have failed us all too many times already?