Two Tech Myths That Refuse to Die...
Every workplace has its own folklore. These tales get passed around like they’re gospel truth. Some deserve to live forever. Others… need to be laid to rest.
So today, we’re setting the record straight on two of the biggest howlers we still hear at conferences, Zoom panels, and yes—even dinner tables.
One’s about cloud computing.
The other’s about AI.
TL;DR
If you’re a GC, CIO, COO, or CHRO reading this, you’ll need to think about your whole division. Thousands of team members. Hundreds of policies. One data breach away from chaos.
That on-prem server? It had its moment. That free AI chatbot? Still fun—but not safe for serious work. We’re in a new era of enterprise computing. And it’s time our policies, habits, and systems caught up.
If your organisation still uses on-prem servers, huddle with the leadership team and strategise how to retire those servers and migrate your organisation’s data and workloads onto the public cloud.
Set the tone: GenAI is powerful, but the need to protect corporate data and trade secrets comes first. Budget for, and roll out, secure enterprise GenAI tools. Educate your teams on why public AI tools aren’t safe for corporate use.
Create an AI Acceptable Use Policy (we’ve pasted a set of guidelines at the base of this post). (Ed. Note: Buy us a coffee one day!)
So… onto the myths… let’s start with the classic:
Myth #1: "On-Prem Servers Are Safer Than the Cloud"
❌ False. They were necessary once. But let’s not confuse nostalgia with best practice.
There’s a lock on the door. A break-glass emergency keybox mounted outside. And someone in IT (usually named Clive) swears only he knows which cable does what.
These setups made sense—ten years ago.
Back when hosting in a proper data centre was prohibitively expensive.
But they were a logistical headache. You needed:
Constant air conditioning.
Raised flooring to feed cables.
Ceiling ducts.
Multiple backup power supplies.
And at least one dramatic IT meltdown every 18 months to give Clive every reason to come in on Sunday night to save the day, before Monday’s frantic earnings call.
Unplug the wrong crossover cable (the digital equivalent of the red vs blue wire dilemma in every spy film), and bam – your corporate network flatlines. Cue chaos.
Now let’s compare that to today’s gold standard: the hyperscale cloud.
Microsoft, AWS, Google, and Samsung operate datacentres built like Fort Knox, offering:
Redundant internet and power connections
Physical security (biometric access, 24/7 surveillance, zero visitor access)
Live incident response teams
And most importantly—no interns wandering in to “just check something”.
A tech company I worked for operated enterprise-grade datacentres housing data and workloads belonging to a major government ministry (the kind with the power to lock people up!). The Minister himself once asked for a tour of the data centre.
I politely declined. “I’m sorry sir, we don’t conduct tours.”
He smiled. “That’s the right answer.”
(Ed. Note: Legit story. Still proud of that one.)
Bottom line: Cloud computing today is safer, cheaper, more resilient, and easier to regulate and audit than on-prem servers ever were.
And no offence to Clive—but modern security shouldn’t depend on who holds the key to the cupboard.
Myth #2: "It’s Fine to Use Public ChatGPT for Corporate Work"
❌ Also false. Look, we get it. Public GenAI platforms are easy. They’re fast. They’re fun. Folks have been generating Studio Ghibli-style images and mock-ups of action-figure toy boxes, but those same tools can just as easily summarise corporate meeting minutes and generate contract clauses.
But here’s where things get murky. 80% of Gen Z employees use GenAI at work.
Say a junior accountant in finance is checking through next quarter’s earnings report. It’s the blackout period.
They know they’re not allowed to talk to anyone about the numbers. But it’s 1am, and they think, “Let me see what ChatGPT makes of this spreadsheet.”
One click later, they’ve submitted embargoed financial forecasts into the world’s largest AI training pool.
Why? Because public, free GenAI tools typically retain data inputs to improve their foundational large language models. In other words, when your employees use the free tier, they are not working within the trust boundary of enterprise-grade protections.
Let’s break this down...
What is a Foundational Model?
A foundational model is a large language model trained on vast quantities of data to perform general-purpose tasks: summarising, translating, writing, analysing. Think of it as the brain behind a GenAI assistant.
When your employees use a public version of a foundational model, their prompts may be retained to refine the GenAI’s brain: the model’s parameters (its weights and biases) and the supporting systems, such as vector indexes, built around it.
This is great for product development. Terrible for corporate confidentiality.
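To make that concrete, here’s a purely illustrative sketch (in Python) of what a retained free-tier exchange might look like once it’s folded into a training corpus. The field names and values are invented for illustration; no provider’s actual schema is being described.

```python
# Purely illustrative: a retained free-tier exchange, reduced to one more
# training example. Field names here are invented, not any provider's
# actual schema.
retained_example = {
    "prompt": "Summarise this spreadsheet: Q3 forecast, revenue $4,200,000 ...",
    "response": "The Q3 forecast projects revenue of $4,200,000, up 12% ...",
    "source": "consumer_free_tier",  # eligible for training by default
}

# Fine-tuning on examples like this nudges the model's weights and biases
# so that outputs like `response` become more likely given inputs like
# `prompt`. At that point, the information in the prompt is baked into the
# model itself, with no practical way to pull it back out.
```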
So What’s the Safer Option?
Enterprise-grade AI. Big tech companies offer enterprise versions of GenAI tools: Microsoft 365 Copilot, ChatGPT Enterprise (from OpenAI), Claude for Teams/Enterprise (from Anthropic), Amazon Bedrock (from AWS), and Gemini for Google Workspace.
These services guarantee that:
Corporate data stays within your company’s private cloud “tenant” (like a private sandbox).
Company data and prompts are not used to train the public model underpinning those GenAI tools.
They meet most enterprise-grade compliance requirements (for example, the GDPR, the CCPA, ISO 27001, ISO 42001, SOC 2, and the PCI DSS)
(Ed. Note: Check your cloud service provider’s technical specs and website for details about these standards, and match them against your client’s information security standards. If you’re not sure, reach out to SAIL; we’re happy to advise you.)
Here’s what the cloud service providers themselves say:
🟢 Microsoft: “Your organization’s data is not used to train foundation models.” (Source)
🟢 Google Workspace (Gemini): “Your interactions with Gemini stay within your organization. Gemini does not share your content outside your organization without your permission. Your existing Google Workspace protections are automatically applied. Gemini brings the same enterprise-grade security as the rest of Google Workspace. Your content is not used for any other customers. Your content is not human reviewed or used for Generative AI model training outside your domain without permission.” (Source)
🟢 OpenAI (ChatGPT Enterprise): “Does OpenAI train its models on my business data? By default, we do not use your business data for training our models.” (Source)
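What does “staying inside the tenant” look like in practice? Here’s a minimal sketch in Python, using OpenAI’s official openai SDK as one example; the API key and model name are placeholders. The point is that the request runs under credentials provisioned and governed by the organisation (where, per OpenAI’s statement above, business data isn’t used for training by default) rather than an employee’s personal free-tier account.

```python
# A minimal sketch: routing a prompt through the organisation's own
# enterprise/API credentials instead of a personal free-tier chat app.
# Key and model name are placeholders; adapt to your provider and plan.
from openai import OpenAI

# Credentials issued and managed under the company's enterprise
# agreement, not an employee's personal account.
client = OpenAI(api_key="YOUR_ORGANISATIONS_API_KEY")

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; use whichever model your plan covers
    messages=[
        {"role": "system", "content": "You are an internal drafting assistant."},
        {"role": "user", "content": "Draft a one-paragraph summary of these minutes: ..."},
    ],
)

print(response.choices[0].message.content)
```

The detail that matters isn’t the SDK; it’s that access is provisioned, logged, and governed by the organisation, whichever vendor you choose.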
Here’s the key: if an employee floats the idea of using ChatGPT to draft your next board memo, insist they do it using GenAI apps that reside on your corporate tenant.
If the employee doesn’t (yet) have a licence or seat to use Copilot at work, and they suggest anonymising data within the memo before pasting it into a free GenAI app, it’s prudent to ask them to consider: is it worth the risk? Anonymising doubles the work they’ll have to do, accidents will happen, and who knows what data that platform will be ingesting.
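“Accidents will happen” isn’t hand-waving. Here’s a small hypothetical sketch in Python of a naive redaction pass, the kind of anonymising an employee might attempt, showing how context slips through even when the obvious identifiers are masked.

```python
import re

# A naive redaction pass: mask dollar amounts and email addresses before
# text is pasted into a free GenAI app. (Hypothetical patterns, for
# illustration only.)
PATTERNS = [
    (re.compile(r"\$\d{1,3}(,\d{3})*(\.\d+)?"), "[AMOUNT]"),
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),
]

def redact(text: str) -> str:
    for pattern, placeholder in PATTERNS:
        text = pattern.sub(placeholder, text)
    return text

memo = ("Q3 revenue landed at $4,200,000, up eighteen percent on Q2. "
        "Loop in cfo@example.com before the earnings call.")
print(redact(memo))
# -> "Q3 revenue landed at [AMOUNT], up eighteen percent on Q2.
#     Loop in [EMAIL] before the earnings call."
```

The dollar figure and email are masked, but “up eighteen percent on Q2” and “before the earnings call” sail straight through, and that context alone may be enough to identify the company and the embargoed result.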
The Bottom Line?
Your leadership sets the tone—and the standard. It’s essential to guide your teams clearly on the smart (and secure) use of cloud and AI technology.
Retire those legacy on-prem servers, shift securely to cloud computing, and make enterprise-grade GenAI tools available company-wide.
But don’t stop there. People are still learning about AI, and they’ll need your leadership. Set clear policies, invest in training, update your employee handbooks, and show them how it’s done.
If they understand why corporate safety comes first—and you give them the right tools—they’ll follow your lead.
And as for Clive and his trusty old server? Perhaps it’s time to invest in training Clive to become an AI power user, and have him teach employees how to prompt for success.
Need some guidance to get this right?
SAIL (Solutions and AI for Lawyers) specialises in helping companies navigate the AI and cloud landscape.
Whether it’s drafting a practical GenAI policy, facilitating secure transitions to cloud services, or training your teams to use AI safely and confidently—we’ve got your back.
Let’s chat. Reach us at: ✉️ khaled@sail.legal
Khaled Shivji
Founder—SAIL