I’ve been studying AI and working on ML models a lot recently, and regulation keeps coming up in those discussions. These are my thoughts.
AI (artificial intelligence) is, at the moment, essentially unregulated. We rely on the producers of models, and the morals of the people who use them, to do what is right. The problem is that “right” is subjective.
You can ask an AI model to draw a person who looks like Hitler, and it will likely refuse. You can ask ChatGPT any question you like, but who decides whether it will answer? Could the companies behind these models influence political agendas or limit free speech? Could the models have inbuilt biases? Right now, that’s up to the creators of the models to decide.
Could companies generating models decide to exclude data about certain groups of people? If a vegan ran a company producing language models, could they manipulate those models to be biased towards veganism?
The answers to these questions depend on the people who control millions in business revenue choosing to balance growth against morality.
This technology can touch areas of your life without your permission. You may use your face to unlock your phone, or your fingerprint to register your arrival at work or open a door. Smart devices can recognize your voice. You grant permission for that data to be used in a particular way, but what if it were used without your consent?
AI makes it possible to identify people without their consent, make predictions from behavioral models, associate their faces with purchases and advertising, and implicate them in crimes when it gets things wrong.
Amazon built a shop without checkouts that tracks your movement and the things you pick up. That is very cool: it removes a huge bottleneck in the experience (checkout queues). But you consent to that; it isn’t forced upon you.
If you take a flight, ride a train, drive a car, use a credit card, drink water or eat food, at some point a regulatory body has told that industry: you must do what we say, you can’t do that without our permission, and you must operate in a certain way.
In the UK, we have the Food Standards Agency, the Civil Aviation Authority, the Rail Safety & Standards Board and the Financial Conduct Authority, among others. Most would agree these authorities exist for our benefit.
AI will need the same controls to prevent businesses from prioritizing profits over privacy and growth over ethics.
Look at the gambling industry as an example: massive revenues and incentives for bookmakers caused societal issues that forced regulators to impose spending limits, require warning signage and create schemes like Gamstop.
We risk similar ethical, business, safety and privacy issues.
Across the world, thousands of journalists churn out thousands of news stories a day. Accountants calculate profit and loss; authors hammer keys on a typewriter, producing books. Artists, advertising executives, and graphic designers produce creative after creative.
ChatGPT risks displacing anybody working with language or numbers, and DALL-E and Midjourney risk displacing designers. At the very least, they increase competition and create wage pressure.
Artificial intelligence will also displace workers through physical automation: autonomous cars, deliveries, robots and so on. We need to create programs to retrain the displaced to find new work or to use AI within their jobs.
Autonomy through robotics may help reduce food costs. We have seen automated dairies and farm equipment for many years, allowing existing staff to manage more cows and more acres. That may be the wider pattern: higher worker productivity, with AI as an additional tool.
Stop and think about what happens in a recession when unemployment grows. Supporting those without jobs is expensive for the government and leads to higher taxes. If people lose their jobs to AI, where do they work? How do they buy food?
When engines made horses redundant, the blacksmiths who shod them became mechanics or launched new businesses feeding those industries: oil producers, tyre manufacturers and tractor parts producers.
When the internet was born, it allowed people to launch online businesses and learn things that would have been impossible before. We saw new marketing techniques and new revenue models. YouTube has even enabled some companies to give away their work for free, because recording it and uploading it to YouTube earns more than selling it would.
The real challenge is that AI could displace the jobs of a large proportion of the world’s population while creating little in the way of new industries to absorb them.
New industries might include:

- AI development
- AI management (software updates, growth, tuning)
- Maintenance (the hydraulic oil on burger-flipping machines needs changing, etc.)
- Ethics and compliance
We have little evidence to suggest where displaced people will find new jobs and careers.
Autonomous cars and tractors risk eradicating the traditional taxi driver, with human-driven rides surviving only as “artisanal” businesses. Imagine London’s black cab drivers being replaced by autonomy: what do the 21,000 cabbies do?
Regulation could be created saying a black cab can’t operate autonomously, securing the jobs of those 21,000 drivers; however, that protection might be temporary. If consumers decide they prefer a cheaper, more consistent service, even a slower one, those “protected” by AI regulation will simply watch their customers move to the alternatives.
Humans are ultimately more expensive than machines to operate, and regulation will struggle to protect manual jobs once consumers decide they can get a cheaper, more consistent service elsewhere.
If AI is the tractor, what is the AI mechanic?
We must ensure that models are built ethically, free of bias, and sensitive to local laws and customs. A language model could offer responses about topics that are illegal in some countries; in places with internet censorship, that could endanger people’s freedom.
Does AI help or hinder poorer communities? AI gives countries and people with good internet access an advantage over remote or poorer communities, for whom AI models may be out of reach.
People must be educated about AI globally, but we can’t allow our future architects and doctors to ask ChatGPT to do the work for them. A standardized approach, with agreement on the boundaries of where AI can help in these professions, should be explored.
People using AI should be transparent about when they are using it.
Any regulator must work with the industry to protect people from wrongdoing while maintaining innovation and not stifling creativity.
Regulations that are too inflexible or too slow to adapt will limit innovation; any regulatory body must be ready to assess new developments quickly.
Legal, compliance and business issues will arise from inconsistent standards across borders.
A large language model may draft responses involving illegal material and tell a 13-year-old about pornography or drugs. Policing every possible output is impractical; instead, the effects of the technology should be regulated.
AI models can’t serve only their creators; they should be for everyone and respect people’s opinions. We should not have biased models, or models that refuse to answer topics their creators dislike. Models should be used ethically, and the data they produce and collect should be controlled sensibly. We need to collaborate globally on this policy. AI regulation should empower the good and balance the bad while keeping people secure and their privacy intact.