In recent discussions within the tech industry, prominent figures have voiced growing concerns about the concentration of artificial intelligence (AI) development in the hands of a few powerful companies. The rise of AI, fueled by innovations like OpenAI’s ChatGPT, has ignited what some describe as an AI arms race, with tech giants such as Microsoft and Google rushing to deploy their own AI models.
The crux of the matter lies in the immense computing power required to train these large-scale AI models on vast datasets. As Meredith Whittaker, president of encrypted messaging app Signal, pointed out, “Right now, there are only a handful of companies with the resources needed to create these large-scale AI models and deploy them at scale.” This concentration of power raises concerns about the potential influence these companies could wield over society and institutions.
Whittaker, a former Google employee, expressed apprehension about the derivative nature of AI, describing it as a technology rooted in centralized corporate power and control. Her concerns are shared by others, including Frank McCourt, founder of Project Liberty, an organization aiming to promote a more responsible approach to technology development.
McCourt emphasized the dominance of “basically five companies that have all the data” and warned that without changes, these platforms would be the ultimate winners. Both Whittaker and McCourt argue that the centralized nature of AI development, fueled by large language models and massive amounts of data, poses risks to user control and privacy.
The sentiment is echoed by Tim Berners-Lee, the inventor of the web, who has raised concerns about the concentration of power among tech giants. Jimmy Wales, founder of Wikipedia, while acknowledging the current leadership of tech giants in AI, sees room for disruption. He pointed to the competitive threat posed by open-source models, which allow AI applications to be developed and improved by anyone, without the need for massive resources.
The debate over AI’s concentration of power and its societal impact is gaining traction, with voices calling for a more responsible and diverse approach to technology development. As the industry navigates this landscape, questions about user data control, corporate influence, and the potential harm caused by AI technologies remain at the forefront of discussions among tech executives and thought leaders.