Imagine someone pulls up at the pub in a luxury sports car (a £1.5m Koenigsegg coupé, no less), stops and gets out. They walk into the pub where you are drinking and begin moving among the customers, reaching into your pocket in plain sight, smiling at you as they take your wallet and empty it of cash and cards.
This none-too-subtle pickpocket only stops when you kick up a fuss and ask what on earth they are doing. "Sorry for the inconvenience," says the pickpocket. "It's an opt-out regime, mate."
It sounds ridiculous. Yet this is the approach the government appears ready to take in order to appease AI companies. The Financial Times has reported that a consultation, due to begin soon, will consider a regime under which AI companies may scrape content from individuals and businesses unless those parties actively opt out of their data being used.
The AI revolution has been as all-encompassing as it has been rapid. Even if you're not one of the 200 million people who log into ChatGPT every week, or who interact with its generative AI competitors such as Claude and Gemini, you are almost certainly encountering AI systems, whether consciously or not. But the fire of AI needs two sources of constant replenishment if it is not to go out. One is energy, which is why AI companies are getting into the business of buying nuclear power plants. The other is data.
Data is vital to AI systems because it is what they use to create facsimiles of how we communicate. Whatever "intelligence" AI possesses (a hotly contested term, given that these systems are really just fancy pattern-matching machines) comes from the data they are trained on.
So voracious is their appetite that one study forecasts large language models such as ChatGPT will exhaust the stock of fresh training data by 2026. Without that data, the AI revolution could stall. Tech companies know this, which is why they are signing content licensing deals left and right. But licensing introduces friction, and an industry whose unofficial motto for the past decade has been "move fast and break things" does not like friction.
That's why they are already trying to push us from an opt-in approach to copyright, under which companies must ask before using our data, towards an opt-out one, under which everything we write, publish and share automatically becomes AI training data unless we say otherwise. We can already see companies pressing this shift: this week, X began notifying users of a change to its terms of service that will allow all posts to be used to train Grok, Elon Musk's rival to ChatGPT. Meta, the parent company of Facebook and Instagram, made a similar change, prompting the viral "Goodbye Meta AI" posts, a copypasta that purported to be a legally binding objection but carried no legal force.
The reason AI companies want an opt-out regime is obvious: ask most people whether they want the books they write, the music they make, or the posts and photos they share on social media to be used to train AI, and they will say no. At which point the AI revolution grinds to a halt. Why a government would want to upend a concept of copyright ownership that has been enshrined in law for more than 300 years is less obvious. But, like so many things, it seems to come down to money.
The government is facing lobbying from big tech companies, which argue that such a change is a prerequisite for the country to be seen as a place to invest in AI innovation and share in the spoils. A lobbying paper written by Google in support of its preferred approach to copyright suggested that, with such a change, "the UK will surely be a competitive place to develop and train AI models in the future." That the government is reportedly putting the opt-out approach on the table at all is already a big victory for the big tech lobby.
With the amount of money swirling around the tech industry and the scale of investment flowing into AI projects, it's no surprise that Keir Starmer doesn't want to miss out on the potential rewards. So the government is considering how to appease the tech companies building this world-changing technology, in the hope of turning the UK into an AI powerhouse.
But this is not the answer. Let's be clear: the UK's proposed copyright regime would let companies scrape, with impunity, every post we make, every book we write and every song we compose, unless we seek out each individual service and tell it that we don't want our data chewed up and regurgitated as a pale imitation of ourselves. There could be hundreds of such services, from big tech companies to small research labs.
Lest we forget, OpenAI, a company now valued at more than $150bn, plans to abandon its founding non-profit principles. It has enough money in its coffers to pay for training data rather than relying on the largesse of the general public. These companies can well afford to put their hands in their own pockets. It's time they stopped picking ours.