Don't fall for the bring-your-own-AI trap

Commissioned

Generative AI adoption within organizations is probably much higher than many realize when you account for the tools employees are using in secret to boost productivity.

Such shadow AI is a growing burden IT departments must shoulder, as employees embrace these digital content creators.

Seventy-eight percent of employees are "bringing their own AI technologies" (BYOAI) to work, according to a joint Microsoft and LinkedIn survey. While the study acknowledges that BYOAI puts corporate data at risk, it downplays the sweeping perils to organizations' data security.

Whether you call it BYOAI or shadow AI, the phenomenon is potentially far worse than the unsanctioned use of cloud and mobile applications that preceded it.

As an IT leader, you'll recall the bring-your-own-device (BYOD) trend that marked the early days of the consumer smartphone 15 years ago.

You may have even watched in horror as employees ditched their beloved corporate BlackBerrys for iPhones and Android smartphones. The proliferation of unsanctioned applications downloaded from app stores exacerbated the risks.

The reality is that consumers often move faster than organizations. But consumers who insist on using their preferred devices and software rarely integrate with enterprise services, and they don't concern themselves with risk or compliance requirements.

As risky as shadow IT was, shadow AI has the potential to be far worse: a decentralized Wild West, a free-for-all of tool consumption. And while you can hope that employees have the common sense not to drop strategy documents into public chatbots such as OpenAI's ChatGPT, even something as innocuous as meeting transcriptions can have serious consequences for the business.

Of course, as an IT leader you know you can't sit on the sidelines while employees prompt whatever GenAI service they prefer. If ignored, shadow AI courts potentially catastrophic consequences for organizations, from IP leakage to tipping off competitors to critical strategy.

Despite the risks, most organizations aren't moving fast enough to put guardrails in place that ensure safe use; 69% of companies surveyed by KPMG have yet to do so.

Deploy AI safely and at scale

Fortunately, organizations have at their disposal a playbook to implement AI at scale in a way that helps bolster employees' skills while respecting the necessary governance and guardrails to protect corporate IP. Here's what IT leaders should do:

Institute governance policies: Establish guidelines addressing AI usage within the organization. Define what constitutes approved AI systems, vet those applications and clearly communicate the potential consequences of using unapproved AI in a questionable way.

Educate and train: Giving employees approved AI applications that can help them perform their jobs reduces the incentive to use unauthorized tools. You must also educate them on the risks associated with inputting sensitive content, as well as what falls into that category. If you do decide to allow employees to try unauthorized tools, or BYOAI, provide the right guardrails to ensure safe use.

Provide use cases and personas: Education includes offering employees use cases that could help their roles, supported by user "personas" or role-based adoption paths to foster fair use.

Audit and monitor use: Regular audits and compliance monitoring mechanisms, including software that sniffs out anomalous network activity, can help you detect unauthorized AI systems or applications (see the sketch after this list for one simple way to surface that traffic).

Encourage transparency and reporting: Create a culture where employees feel comfortable reporting the use of unauthorized AI tools or systems. This helps facilitate rapid response and remediation, minimizing the fallout before incidents escalate.

Communicate constantly: GenAI tools are evolving rapidly so you'll need to regularly refresh your AI policies and guidelines and communicate changes to employees. The good news? Most employees are receptive to guidance and are eager to do the right thing.
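To make the audit-and-monitor step concrete, here is a minimal sketch of what shadow AI detection can look like in practice. It assumes you export web proxy or DNS logs as a CSV with "user" and "domain" columns; the file name and the domain watchlist below are illustrative placeholders rather than a reference to any specific product or policy.

```python
import csv
from collections import Counter, defaultdict

# Illustrative watchlist of public GenAI endpoints; extend it to match
# whatever services your own policy classifies as unapproved.
GENAI_DOMAINS = {
    "chat.openai.com",
    "api.openai.com",
    "gemini.google.com",
    "claude.ai",
}

def scan_proxy_log(path: str):
    """Count requests to known GenAI domains, grouped by user."""
    hits_by_user = defaultdict(Counter)
    with open(path, newline="") as f:
        # Expects columns named 'user' and 'domain' in the log export.
        for row in csv.DictReader(f):
            domain = row["domain"].strip().lower()
            if any(domain == d or domain.endswith("." + d) for d in GENAI_DOMAINS):
                hits_by_user[row["user"]][domain] += 1
    return hits_by_user

if __name__ == "__main__":
    # 'proxy_log.csv' is a placeholder for your own log export.
    for user, counts in scan_proxy_log("proxy_log.csv").items():
        total = sum(counts.values())
        print(f"{user}: {total} requests to unapproved GenAI services {dict(counts)}")
```

In production you would feed the same watchlist into a secure web gateway or CASB rather than a standalone script, but the principle is the same: compare observed traffic against your approved-tools list and surface the exceptions for follow-up.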

Solutions to help steer you

GenAI models and services are evolving daily, but there are some constants that remain as true as ever.

To deploy AI at scale, you must account for everything from choosing the right infrastructure to picking the right GenAI models for your business to security and governance risks.

Your AI strategy will be pivotal to your business transformation so you should weigh whether to assume control of GenAI deployments or let employees choose their own adventures, knowing the consequences of the latter path.

And if you do allow for latitude with BYOAI, shadow AI or whatever you choose to call it, do you have the safeguards in place to protect the business?

Trusted partners can help steer you through the learning curves. Dell Technologies offers a portfolio of AI-ready solutions and professional services.

Learn more about Dell AI solutions.

Brought to you by Dell Technologies.
