Deciphering AI's impact on society: an open discussion with AI and blockchain expert Kitty Horlick

Kitty Horlick, guest speaker at Emergent/04

// Daniel Jason


Director at Material Impact Marketing Communications

As society’s desire to harness generative AI grows, so do the personal, business, legal and wider societal issues to grapple with.

Yesterday evening at the fourth Emergent event, keynote speaker Kitty Horlick, a pioneer in web3, blockchain and metaverse technologies, captivated an audience of around 30 with her perspective on generative artificial intelligence (AI), blockchain technology and the implications for business and wider society.

Generative AI has dazzled the world with its capacity to generate content, including images, text and code. It is a relatively new technology, and the internal workings of generative AIs such as ChatGPT are obscured – even their programmers do not know why a model produces one output over any other. Kitty explained that these generative models work by forming associations between disparate elements and do not operate like the human brain, though to an outsider their responses may feel very humanlike. “It’s quite easy to feel it operates like a human mind, and there are some similarities, like neural networks, but AI works on computation, and all its outputs are based on statistics. Language models do not actually understand language or the meaning of the words they output. Instead, a model translates inputs into numerical data by breaking the language down into tokens, which are numbers, and then feeding those into its neural network. GPT then uses statistics to predict the most likely first token to follow the user’s prompt. Then it calculates the most likely token to follow the prompt plus the first token, and on and on.”
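The token-by-token process Kitty described can be sketched in a few lines of Python. This is purely illustrative: the vocabulary and probability table below are toy inventions, whereas a real model like GPT computes these probabilities with a neural network over tens of thousands of tokens. The shape of the loop, though – convert text to numbers, repeatedly append the statistically most likely next token – is the same.

```python
# Illustrative sketch of autoregressive next-token prediction.
# The "model" here is a hand-written bigram table, not a real neural
# network; real LLMs learn these probabilities from vast training data.

# Hypothetical tokeniser: each word maps to an integer token ID.
VOCAB = {"the": 0, "cat": 1, "sat": 2, "on": 3, "mat": 4}
ID_TO_WORD = {i: w for w, i in VOCAB.items()}

# Toy "statistics": probability of the next token given the previous one.
BIGRAM_PROBS = {
    0: {1: 0.6, 4: 0.4},   # after "the": usually "cat", sometimes "mat"
    1: {2: 1.0},           # after "cat": "sat"
    2: {3: 1.0},           # after "sat": "on"
    3: {0: 1.0},           # after "on": "the"
}

def generate(prompt_words, n_tokens):
    """Greedily append the most probable next token, one at a time."""
    tokens = [VOCAB[w] for w in prompt_words]  # text -> numbers
    for _ in range(n_tokens):
        dist = BIGRAM_PROBS.get(tokens[-1])
        if dist is None:
            break
        # Pick the statistically most likely continuation.
        tokens.append(max(dist, key=dist.get))
    return [ID_TO_WORD[t] for t in tokens]

print(generate(["the"], 4))  # → ['the', 'cat', 'sat', 'on', 'the']
```

Note that nowhere does the code "understand" the sentence it produces; it only follows the probabilities, which is exactly the point Kitty was making.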

OpenAI launched ChatGPT in November 2022, arguably before its team fully understood its inner mechanisms. This proactive approach was motivated by the idea that if a fully developed model were unleashed on a public that had not had time to get to grips with it, disaster could ensue. Instead, the iterative deployment approach OpenAI has adopted allows it to tweak the model in accordance with the needs of the public. Moreover, if one organisation had the upper hand in understanding these models, it would hold an unhealthy monopoly on the understanding of AI. Despite this, however, the release of ChatGPT also set off a competitive race among firms, each vying for AI dominance. Figures such as Google’s ‘AI Godfather’ Geoffrey Hinton have voiced concerns that the intensity of this race is leading companies to abandon certain safety checks and balances.

Stage is set at 180 Studios located at The Strand, London

Keynote speaker Kitty Horlick

Training such models is a data-intensive task; the more and better the data, the higher the quality of the model and its outputs. Interestingly, finely tuned models trained on high-quality, specific data are starting to perform comparably to larger models like ChatGPT.

AI’s utility extends across a wide array of industries, including coding, design and advertising. One startup is employing generative AI to augment the efficiency of call centres by processing accumulated data. Other applications include building training databases for smaller language models. 

Despite the promising possibilities, AI has two main drawbacks that make it a potent vector for disinformation. The first is that it is susceptible to ‘hallucinations’, inadvertently generating false information. The quality and nature of the data used to train a model significantly influence its outputs, and this principle applies irrespective of the application. The second is the intentional use of generative AI to create false content, like the deepfake photos of an explosion at the Pentagon, and distribute it as propaganda. AI-generated images and text are very convincing, and while they can fool a lot of people, these models are not yet perfected, so it is still possible to spot inconsistencies, such as the way the fence around the Pentagon melds into the crowd barriers. As the models improve, however, this won’t always be the case.

AI also carries substantial legal and ethical concerns. It is challenging to guarantee the outcomes of AI products due to their inherent unpredictability. Additionally, Kitty explained how “AI models may harbour biases, as evidenced by GPT’s politically left-leaning outputs.” The consequences could be particularly severe when AI is used in biomedicine: there are fears that if an AI is not fed enough data from patients from minority ethnic groups, it will struggle to diagnose them accurately. There is already a precedent for this in facial recognition software, which has historically struggled to distinguish the facial features of people from non-white ethnic groups. Companies also run the risk of lawsuits if their AI models inadvertently infringe copyright, as in the ongoing court case between Getty Images and Stability AI, whose generative image model Stable Diffusion produced “new” images bearing Getty watermarks, revealing that its training data included copyrighted Getty images.

AI regulation has become a hot topic, with no solid approach laid out even after a Senate inquiry involving US AI company leaders like OpenAI’s Sam Altman. One of the key recommendations from that inquiry, however, was that the data sets used to train AI models should be made transparent. AI companies can currently keep their precise training data a commercial secret, but knowing what a model has been trained on makes it much easier to understand why it has given certain outputs – and therefore provides something to trace back and potentially regulate. The EU is currently drafting a bill that Sam Altman has said might cause OpenAI to cease operations in Europe, as it demands transparency over ChatGPT’s training data. Meanwhile, US regulation is expected to go in a different direction, so we may see regulatory divergence across the world, explained Kitty.

Audience Q&A session over dinner with Kitty Horlick.


Meanwhile, blockchain technology may hold some of the solutions the public needs to protect itself from a post-truth, AI deepfake world. Its reputation has been tarnished by cryptocurrencies, but at its core blockchain is a tamper-proof ledger, storing data blocks verified by over 50% of network participants. Each block contains a unique serial number, or ‘hash’, and any attempt at altering the data alters the hash, rendering it invalid in the overall blockchain. Further, in order to hack the chain, an entity would have to take control of over half of all the validators in the blockchain, which could be many tens of thousands. Both of these features make it extremely difficult to tamper with the blockchain.
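The tamper-evidence Kitty described comes from each block’s hash incorporating the previous block’s hash, so changing one block breaks every link after it. A minimal sketch of that structure (leaving aside the consensus among thousands of validators that a real blockchain layers on top):

```python
# Minimal sketch of a hash-chained ledger, showing why altering one
# block invalidates the chain. A real blockchain adds network-wide
# consensus on top of this basic structure.
import hashlib

def block_hash(index, data, prev_hash):
    """Hash this block's contents together with the previous block's hash."""
    return hashlib.sha256(f"{index}|{data}|{prev_hash}".encode()).hexdigest()

def build_chain(records):
    chain, prev = [], "0" * 64  # genesis block links to a fixed value
    for i, data in enumerate(records):
        h = block_hash(i, data, prev)
        chain.append({"index": i, "data": data, "prev_hash": prev, "hash": h})
        prev = h
    return chain

def is_valid(chain):
    """Recompute every hash; any altered block breaks the links after it."""
    prev = "0" * 64
    for block in chain:
        if block["prev_hash"] != prev:
            return False
        if block_hash(block["index"], block["data"], prev) != block["hash"]:
            return False
        prev = block["hash"]
    return True

chain = build_chain(["alice pays bob", "bob pays carol"])
assert is_valid(chain)
chain[0]["data"] = "alice pays mallory"  # tamper with an early block
assert not is_valid(chain)               # the chain no longer verifies
```

An attacker who rewrites one block would have to recompute every subsequent hash and then convince more than half the network’s validators to accept the forged chain, which is what makes tampering impractical in practice.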

Blockchain’s inherent immutability and transparency make it an ideal technology for ensuring data transparency in AI, an essential feature for AI safety. Any data stored on the blockchain can be taken not to have been altered since it was recorded, and blockchains could potentially provide a public infrastructure offering a single source of truth against which to verify identities and prevent impersonation. Companies like Chainlink are discussing highly innovative uses of blockchain technology to challenge disinformation and deepfakes.

Kitty Horlick’s talk was a refreshing exploration of AI and blockchain and the synergies between them, revealing both the opportunities and challenges at this fascinating intersection. Despite the numerous risks, she remains optimistic. She believes that by discussing these issues openly, the risks are more likely to be mitigated, and these technologies used responsibly for the betterment of society.

For now, the question is very open: will these technologies have a negative or positive impact on society? As we delve deeper into this new era of AI and blockchain, the answer will unfold in time.

——

Material Impact Marketing Communications makes brands more visible and valuable by helping firms tell their stories with impact. We specialise in PR and marketing services for asset managers looking to build, raise or protect their public profile. Our team provides comprehensive support, including media relations, brand development and thought leadership strategy. 
