📖 Homo Deus: A Brief History of Tomorrow ★★★★☆
"What will happen to society, politics and daily life when non-conscious but highly intelligent algorithms know us better than we know ourselves?"
This is how Yuval Noah Harari ends his book, Homo Deus. It's a fascinating book that sets the scene for how we arrived at the twenty-first century, and what the coming century may look like. A closing focus of the book is the concept of Dataism, in which data and the flow of information are of paramount importance.
In many ways we're already there. To the world at large, the data I produce is likely more valuable than I am as a whole person. My heart rate data sent to a researcher in Australia has more potential than my personal interactions while ordering coffee, and the collection of likes and friend requests on my [now deleted] Facebook profile tells corporations more about my lifestyle and habits than I could recall myself.
But is that the point?
It's certainly happening all around us, and I personally don't feel that mass data collection is a bad thing. But the idea that we could create highly intelligent algorithms that render humanity redundant is a dangerous one, and I don't believe that Dataism alone could take us there. Thankfully.
In the next century — and Harari largely agrees — our focus is likely to remain on ourselves, and this mass collection and flow of data will continually improve our wellbeing. Take human health, for example. It's incredible to think that I have to fall noticeably ill before someone can help me. Even my car gets better treatment than that — it would be illegal for me to drive it without having it checked every twelve months, and that check highlights not only current problems but also things that could become problems. With my health, there is no such check — at least not one that is widely available without spending hundreds of pounds.
But imagine that in a decade or two your body is automatically connected to this network. Your metrics will be powering groundbreaking research around the world, and in return you'll be ultra-aware of your state of being. At even the slightest hint of a cold or illness, you'll be notified, so early that the treatment itself will often be minimal and straightforward, and you'd often be none the wiser for it. Is sharing your health with the world a fair trade for immediate diagnosis? I'd argue that it is, but over time it becomes incredibly risky.
As the book elaborates, the systems powering such a network will start to rely less and less on humans and more on intelligent algorithms powered by machines. As they gain data and power, they will know more about humanity than we ever could, and that amount of data could be fatal. What happens in the scenario where the machine picks up that you have the beginnings of a dangerous disease, but it's also aware that your contribution to society is minimal, and that your value to the algorithms is lower than the cost of your treatment? If we imagine that at this stage humans aren't controlling the algorithms, and the humanist value that every individual matters is therefore no longer centre stage, what's stopping the algorithm from keeping that diagnosis from you? After all, you'd have no reason to doubt it until it was too late.
That scenario is the real danger, in my opinion. There's immense value to be harvested here, but we have to ensure that humanity remains our single focus if we are to survive. Our data may be important, but our experiences and relationships must retain their value if we are to benefit.
A few years ago, as a new computer science graduate, I would have been all for giving power to algorithms and putting the machine centre stage. But as I grow older, I find I value people, nature and experiences so much more. I have no idea what the meaning of life is, or whether there even is one, but our individual happiness must come first, because the world is simply an incredible place when it does.