AI Act: Handing Dislightenment Power to BigTech -- πŸ¦› πŸ’Œ Hippogram #22

You are about to read our newsletter for health and tech professionals - the Hippogram.

I'm Bart de Witte, and I've been inside the health technology industry for more than 20 years. In that time, I've witnessed the evolution of technologies that are changing the face of healthcare, business models and culture in unexpected ways.

In this newsletter, I share my knowledge and insights about building a more equitable and sustainable global digital health. Sharing knowledge is also the focus of my Hippo AI Foundation, named after Hippocrates, and it is an essential part of a modern Hippocratic oath. Know-how will increasingly be derived from the data we produce, so it is crucial that we share it within our digital health systems.

Next week, the EU's legislative decision-making process (the trilogues) concerning the AI Act will kick off in full swing. Trilogues are closed-door, informal negotiations in which representatives of the three main EU institutions – the European Parliament, the European Commission and the Council of the European Union – hash out a compromise on law proposals that will impact the lives of 450 million citizens. This time, their decisions could even impact the lives of billions of people in low- and middle-income countries as well. In fact, they will shape the future of our children, because the current AI Act is counterproductive: it concentrates power in unsustainable ways and risks rolling back the societal gains of the Enlightenment.

UN High-Level Advisory Body on AI

Only last week, I had the honor of attending the first Multi-Stakeholder High-Level Advisory Body on AI, organized by the United Nations Office of the Secretary-General's Envoy on Technology in NYC, after being personally invited by the Secretary-General's Tech Envoy, Amandeep Gill. The focus was on governing AI for the betterment of humanity, aiming to establish a safe, secure, open, and inclusive digital future.

Now, while the seminar was filled with diverse voices, one viewpoint that particularly stood out, and not necessarily in a good way, was that of Mustafa Suleyman. For those who may not recall: while at Google's DeepMind, Suleyman faced a series of accusations. He was relieved of some of his management responsibilities following complaints that he bullied subordinates. These weren't baseless accusations. Suleyman openly acknowledged them, admitting in a later interview, "I really screwed up."

This isn’t a throwback to his time at DeepMind, but a deeper dive into his latest sentiments on AI governance, which are in line with those of Sam Altman. In recent months, Altman has been engaging with state leaders, aiming to shape regulations that safeguard his business model centered around proprietary AI.

Suleyman's view of AI was unorthodox, to say the least. He emphasized that "what is coming is far more dramatic than most people can imagine," and stoked unfounded fears about the democratization of AI capabilities. He painted a future in which BigTech companies, like the ones he has been a part of, should have a leading role in AI's regulatory processes. I find this comparable to asking Big Tobacco to sit at the table to discuss global health. Yes, I highly respect his engineering expertise, and engineers are trained to solve problems, but because AI affects the whole of society, engineers should not solve this alone; we need the inclusion of many disciplines to gain a broader perspective than the technology itself. Remember what happened the last time we let tech-savvy thirty-somethings, armed with god-like technology and endless funds, dictate the course of our world? Beyond sacrificing privacy, we also lost our ability to focus, and society became increasingly polarized.

During the UN event, Suleyman strongly advocated for immediate and almost restrictive regulations on who is allowed to engineer AI systems. He even suggested that AI researchers should have a license – in essence, gatekeeping who gets to innovate and letting institutions decide who qualifies. His assertion, "I predict this, I'm betting on it, and it's crucial for the world to recognize this," sounded all too arrogant and authoritarian, drawing parallels to Google's earlier scandals.

This doesn't come as a surprise, as both Suleyman and Altman have been confronted with the power of open innovation. For example, after the RoseTTAFold team published an open and free alternative to AlphaFold that was more energy efficient, DeepMind had to open-source AlphaFold. Something similar will happen with his new venture, Inflection, should he choose to go dark and protect his technology with trade secrets.


He also mentioned that AI will not only be an outstanding tutor and assistant but also an exceptional doctor. By insinuating that AI could take on the role of a physician, while advocating for licenses and opposing open-source AI, he makes the prospect of a dominant BigTech entity controlling all knowledge-driven industries increasingly alarming. Just as pharmacists in the late 19th century saw their expertise and ownership of knowledge taken over by BigPharma, physicians today face the threat of losing their grip on medical knowledge to BigTech.

Age of Dislightenment


While I fully support regulating applied AI, what Suleyman is suggesting is a step too far. Examining the DRAFT Compromise Amendments of the AI Act, particularly Article 28b, it's concerning to see the obligations placed on researchers who share their foundation models. These apply whether a model is standalone, integrated into an AI system or product, or even offered under a free and open-source license, among other channels. The proposal further mandates that researchers establish a quality management system to document compliance before publishing any model. Such requirements threaten to stifle open science and open-source innovation: the bureaucratic overhead will deter researchers from publishing their work, undermining the dynamic nature of global open-source collaboration. In other words, Altman, Suleyman and BigTech want to limit the field to a select group, going against the idea of global fairness and inclusivity. It's similar to regulating the written word and defining who gets to write what, reminding us of times when some opposed widespread access to books for fear of losing control.

In summary, under these regulatory suggestions, we're looking at a scenario where the world's most advanced technology, which is evolving at a breakneck pace, is only accessible in its most potent form to a handful of major corporations, allowing them unrestricted use.

Throughout our vast human story, the uncertain horizon of the future often cast shadows of fear and doubt. Our ancestors, in their quest for security, would often rally behind the powerful, hoping that strength could shield them from the unknown. Many societies kept potent tools, like education and authority, in the hands of the few.

Age of Enlightenment


However, a spark of change ignited, especially in the West. A bold vision arose: What if real security lay not in the hands of the few, but in the collective strength and wisdom of everyone? Imagine a world where knowledge, the right to voice opinions, and access to cutting-edge innovations were available to all. This vision laid the groundwork for the Age of Enlightenment.

In today's world, where the ideals of liberal democracies seem almost commonplace, it's crucial to remember how precious and pioneering these ideas once were. Yet, as we survey the world's landscape, it's evident that the allure of strongmen persists. As Hermann GΓΆring observed, fear is a potent tool for manipulation.

In my upcoming newsletter, I'll delve into the changes needed to ensure the AI Act benefits everyone, and I'll ask for your support in creating more awareness.