Taming Silicon Valley

Navigating the Complex World of Artificial Intelligence

The rise of Artificial Intelligence (AI), particularly generative AI, has become a prominent topic in contemporary discourse, with profound implications for society. This book explores the multifaceted nature of this technology, addressing key questions about its origins, applications, impacts, and the path forward.

What is Generative AI?

At its core, generative AI is an approach to AI that leverages vast amounts of data to make statistical predictions about language, from which new content is generated. It uses large datasets to predict what humans might say in a given context: a generative AI model predicts which words someone would type next in a sentence, or generates an image from a text description. Large language models (LLMs), often referred to as foundation models, are the best-known examples of generative AI. Given a prompt, these models generate responses based on the statistical probabilities of their training data. The underlying processes are not fully understood, which is why the technology is often called a "black box". While it can produce seemingly correct answers, the mechanisms behind the responses are not always clear, which means these systems can produce unpredictable and erroneous results.
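To make the idea of "statistically predicting the next word" concrete, here is a minimal toy sketch in Python. It is a deliberately simplified stand-in for the mechanism described above (a bigram word-pair counter), not how real LLMs are implemented; the corpus and function names are invented for illustration:

```python
from collections import Counter, defaultdict

# Toy training text, split into words.
corpus = (
    "the sun rose over the hills . "
    "the sun set behind the hills . "
    "the moon rose over the sea ."
).split()

# Count word pairs: counts[w1][w2] = how often w2 followed w1.
counts = defaultdict(Counter)
for w1, w2 in zip(corpus, corpus[1:]):
    counts[w1][w2] += 1

def predict_next(word):
    """Return the word that most often followed `word` in the corpus."""
    return counts[word].most_common(1)[0][0]

print(predict_next("the"))  # most frequent follower of "the" here: "sun"
```

Real LLMs work over far larger contexts and learned representations rather than raw pair counts, but the principle is the same: the output is whatever the statistics of the training data make most likely, not what is true.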

When Did Generative AI Emerge?

While the concepts behind AI have been around for decades, generative AI has grown substantially in recent years, thanks to the availability of the large datasets and computing power required to train such systems. The social media era played a significant role by creating the business model of surveillance capitalism, in which personal data is collected and used to target ads. That vast trove of data is key to training generative AI systems. The technology has been evolving rapidly, and its influence on our lives continues to grow.

Where is Generative AI Used?

Generative AI is used across numerous sectors, from the creative arts to business, and appears in applications ranging from chatbots to image generators. In business, it can create personalized marketing content; in the creative arts, it is used to produce images, music, and videos. It is also being used in academia and education, for example to personalize learning. The ubiquity of tools like ChatGPT demonstrates the technology's reach. At the same time, generative AI is increasingly being used for disinformation and market manipulation.

What Are the General Risks of Generative AI?

Generative AI has the potential to reshape our world significantly, for better and for worse. On the negative side, first, these systems can produce content that is factually incorrect or even nonsensical, because they are based on statistical predictions of words rather than a true understanding of the world. The models can generate misinformation, often mixed with truths, and are unable to verify or check facts, which makes them dangerous vehicles for disinformation. Second, the systems may promote bias and discrimination: they are trained on datasets that may contain existing biases, and they are not capable of moral judgement. Moreover, there are concerns about the labor practices of some tech companies, which rely on "digital sweatshops" for the human feedback on which the models depend. The technology can also be used to manipulate markets: a fake viral image of an explosion near the Pentagon briefly sent the stock market tumbling. AI systems are likewise seen as potential tools for undermining democracy. Fake images, videos, and sound recordings have already influenced election outcomes. Democracy is based on a shared reality, a common truth; if reality is difficult to discern, or truth differs from one person to the next, democracy cannot function.

How Does Generative AI Work?

Generative AI models are built by feeding them massive amounts of data from which they learn patterns; they then predict outcomes based on statistical probabilities. These models do not "understand" the concepts they use or the people they describe, and therefore cannot distinguish between fact and fiction. They generate new content by piecing together bits of their training data, often without understanding the context. They record the statistics of words but lack the ability to reason or fact-check. As a result, while they can produce text that sounds natural, the content is not always correct or truthful. This can be seen when a model responds to a prompt asking for rhymes with "some" by producing output like "The sun peeked through the clouds casting a warm glow on the grassy hillside".
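The failure mode described above, fluent text stitched from statistics with no fact-checking, can be demonstrated with a toy word-pair generator. This is an illustrative sketch only (a simple bigram chain, far cruder than a real LLM), with an invented two-sentence corpus chosen so that blending their statistics produces a grammatical but false sentence:

```python
from collections import Counter, defaultdict

# Two true training sentences.
corpus = (
    "rome is the capital of italy . "
    "paris is the capital of france ."
).split()

# counts[w1][w2] = how often w2 followed w1 in the corpus.
counts = defaultdict(Counter)
for w1, w2 in zip(corpus, corpus[1:]):
    counts[w1][w2] += 1

def generate(word, length=6):
    """Greedily chain the most likely next word, starting from `word`."""
    out = [word]
    for _ in range(length):
        if not counts[out[-1]]:
            break
        out.append(counts[out[-1]].most_common(1)[0][0])
    return " ".join(out)

print(generate("paris"))  # "paris is the capital of italy ." - fluent but false
```

Both training sentences were true, yet the generator confidently emits a falsehood, because it tracks which words follow which, not which statements are true. The same basic dynamic, at vastly greater scale and sophistication, underlies LLM "hallucinations".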

Big Tech and Generative AI

The development of generative AI is led by big tech companies that are investing large sums in the technology. These companies are driven by profit and focused on strengthening their market position, which makes them less likely to consider potential negative consequences. The rush to the technological frontier in pursuit of profit has left little focus on safety. Others, however, are concerned about the ethical and social implications of the technology and call for more regulation and transparency. There is also an emphasis on the role citizens should play in ensuring that technology is used ethically and responsibly, and a call for independent scientists, rather than the government or the companies themselves, to provide oversight of AI.

The Way Forward: Responsibility, Transparency, and Education

The current path of AI development is problematic; there is a need for transparency and accountability. The book calls for data, algorithmic, source, environmental, labor, and corporate transparency. These disclosures should work like nutrition labels on food products, letting consumers decide for themselves whether to use a system. The companies that develop the models must be held responsible for any harm they cause.

Furthermore, there is an emphasis on the need for education. Every citizen should be aware of how AI works, what it can and cannot do, and their legal rights if they are harmed by AI. There is a recognition that education is necessary if people are going to be able to use AI safely and responsibly.

The book also suggests that to move forward effectively, multiple stakeholders must be at the table to ensure that AI is developed responsibly, including civil society, government regulators, and independent scientists. A key shift is also needed in how we think about the technology: rather than focusing purely on automation, which would lead to mass unemployment, we should aim to augment human capabilities.

While generative AI presents significant challenges, these are not insurmountable. By focusing on responsible development, rigorous evaluation, and broad education, society can steer AI toward a more beneficial path. The book urges us not to be passive recipients of technology, but active agents in shaping the world in which it exists, ensuring that progress benefits all.

Chankhrit Sathorn