Such has recently been the case in Italy. Senseless? Perhaps.
Last week, Italy’s data protection watchdog ordered OpenAI to temporarily cease processing Italian users’ data amid a probe into a suspected breach of Europe’s strict privacy regulations.
The regulator, known as the Garante, cited a data breach at OpenAI that allowed users to view the titles of conversations other users were having with the chatbot.
There “appears to be no legal basis underpinning the massive collection and processing of personal data in order to ‘train’ the algorithms on which the platform relies,” Garante said in a statement Friday. Garante also flagged worries over a lack of age restrictions on ChatGPT and how the chatbot can serve factually incorrect information in its responses.
These rational concerns apply to many aspects of the open Interweb with no such bans in place, evidence that perhaps this mandate is political. And there is cause for careful consideration of how AI can be used instead of broad assumptions that it is ideal in every case.
Proclamations like this one in Italy rarely move the needle in a positive direction, and, political as it may seem, it’s difficult to unpack. It’s far less difficult to unpack when the same thing happens inside a business.
A friend of mine recently told me a story. A modern-thinking and highly respected business owner in pharma data science issued a directive to all employees:
“Under no circumstance should AI of any kind be used for work.”
Nutty? Most definitely.
Setting aside the fact that AI is already in use, pervasively and subliminally, throughout this company’s workforce, the directive telegraphs the CEO’s tone:
My workers cannot be trusted with technology that may be flawed.
Seriously!?! That’s the message you want to send to the people who help you run your business? If your workforce is so unskilled that they cannot be given the latitude to utilize information that may conflict with their understanding of reality, the problem isn’t AI.
Unfortunately, I see this more than I want or expect. Business managers in a fast-paced industry rely on media hits designed to attract clicks. They form generalized opinions based on little research or actual findings. This is how the attention economy works; sadly, it has infected business leaders.
In today’s technology and information economy, ALL systems are flawed in some way. And all systems have an element of artificial intelligence; if it isn’t baked into the app, they must lean on actual worker intelligence to perform well.
Leaders in data science should, above all, know that much of their work is based on probabilities. Yet they fear a workforce using ChatGPT to cut corners and risking the company’s standing with its customers, so they eliminate that fear by simply banning its use.
In so doing, they also eliminate the possibility that AI might compensate for workers’ errors, or deliver productivity gains that free workers to focus on the things AI cannot do well.
AI is not perfect. But neither are your workers.
Business leaders who ban AI will trigger several unintended consequences:
- Employees typically want to work for forward-thinking companies; they’ll look elsewhere.
- Bans like this result in laggard adoption; competitors gain ground on leaders.
- Headcount rises to accommodate greater scale; costs, management overhead, and product complexity rise accordingly.
- Workers cannot scale their output without risking additional errors.
In cases where AGI systems may present higher risks for workforce use, the remedy is AI sensitivity training. These are my four simple guideposts for leaning into AGI: