'Bring Your Own AI' Trend Could Spell Major Trouble for Business Leaders

Across all industries, businesses are eager to harness the power of artificial intelligence (AI) to stay ahead of the competition. It's up to senior executives to establish the necessary guidelines and safeguards to ensure the responsible and effective use of these new technologies.
Keith Woolley, the chief digital and information officer at the University of Bristol, is at the forefront of integrating AI across one of the UK's premier academic institutions. Bristol is not only a hub for cutting-edge technology, hosting Isambard-AI, the UK's fastest supercomputer, but it's also pushing the boundaries of AI application throughout its organization.
Day-to-Day Professionals Use AI
While Bristol is keen on leveraging AI to spur innovation across its academic endeavors, Woolley shared with ZDNET how everyday staff in teaching, administration, and research are also tapping into these emerging technologies.
Just as cloud services were once adopted, Woolley pointed out that professionals are now making their own tech choices in what's known as Bring Your Own AI (BYOAI). "It's happening," he stated, noting that the widespread acceptance of cloud technology and the rush by providers to integrate AI into their services means these technologies can slip into organizations without the IT department's knowledge.
"I'm seeing it already, where departments are now building or bringing tools into the institution because every supplier that provides you with a SaaS system is sticking AI into it," Woolley explained.
Bring Your Own AI Is a Growing Trend
BYOAI is increasingly recognized as a trend by other experts as well. Research from the MIT Center for Information Systems Research indicates that this trend emerges when employees use unapproved, public generative AI tools for their work.
Woolley expressed concern over this stealthy introduction of AI, whether by users or vendors, which poses significant challenges for his team and the university's leadership. "Bring your own AI is a challenge," he noted. "It's like when you used to see storage appearing on the network from Dropbox and other cloud providers. People thought they could get a credit card and start sharing things, which isn't great."
MIT's research validates Woolley's concerns, highlighting that while generative AI tools promise productivity boosts, they also introduce risks such as data loss, intellectual property leakage, copyright violations, and security breaches.
Woolley emphasized Bristol's primary concern: the potential loss of control over how AI-enabled SaaS services handle and share data. "The system could be taking our data, which we think is in a secure SaaS environment, and running this information in a public AI model," he said.
Should Businesses Ban Gen AI?
So, how can organizations address the rise of BYOAI? One option is for executives to ban generative AI altogether. However, MIT's research advises business leaders to remain open to generative AI and provide clear guidance to transform BYOAI into a source of innovation.
Woolley agrees, advocating for strict control over application boundaries as the best way to manage BYOAI. "The enforcement of policies is a discussion we're having inside our organization. We're getting guardrails out to people for what they can and can't do," he said.
The university is starting with an approved set of tools to curb the uncontrolled spread of AI.
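An "approved set of tools" policy like the one Bristol describes is, in practice, an allowlist: requests to unapproved AI tools are blocked by default, and approved tools are cleared only for certain data sensitivity levels. The sketch below illustrates that idea; the tool names, data classes, and policy structure are hypothetical, not Bristol's actual configuration.

```python
# Hypothetical sketch of an approved-tools allowlist.
# Tool names and data classes are illustrative only.

APPROVED_TOOLS = {
    "copilot-enterprise": {"data_classes": {"public", "internal"}},
    "internal-chat": {"data_classes": {"public", "internal", "confidential"}},
}


def is_request_allowed(tool: str, data_class: str) -> bool:
    """Allow a request only if the tool is approved and cleared
    for the sensitivity level of the data being sent to it."""
    policy = APPROVED_TOOLS.get(tool)
    if policy is None:
        return False  # unapproved tool: deny by default
    return data_class in policy["data_classes"]


print(is_request_allowed("copilot-enterprise", "internal"))      # True
print(is_request_allowed("copilot-enterprise", "confidential"))  # False
print(is_request_allowed("random-saas-ai", "public"))            # False
```

The key design choice is deny-by-default: a tool the IT team has never reviewed is blocked automatically, which is exactly the gap BYOAI exploits when AI features arrive unannounced inside existing SaaS products.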
Students Want to Use AI
To set the context for these guidelines, Bristol's senior executives engaged with students to understand their perspective on using generative AI in education. "The conversation went from how you would use AI for learning to enrolments, marking, and everything else," Woolley recounted.
Students, it turned out, strongly advocated for the use of AI. "What was surprising was how much students wanted us to use AI. One of the things that came out clearly from our students was that if we don't allow them to use AI, they will be disadvantaged in the marketplace against others that offer the opportunity," Woolley shared.
He compared the introduction of generative AI to the early days of calculators in classrooms. Initially, there were fears that calculators might encourage cheating, but now they're a staple in math education. Woolley predicts a similar trajectory for generative AI.
"We're going to have to rethink our curriculum and the capability to learn using that technology," he said. "We'll have to teach the next generation of students to differentiate information provided through AI. Once we can get that piece cracked, we'll be fine."
Bristol aims to integrate generative AI into education carefully, as Woolley noted, "We've been clear that AI is about assisting the workforce, the students, and our researchers, and where practical and possible, automating services."
Key Considerations
However, recognizing AI's potential is just the beginning. Woolley described the costs of emerging technology as following a hockey-stick curve, warning that without firm rules those costs can spiral if users are left to bring their own AI tools.
The university's senior executives are focused on several key considerations. "The first question is, 'How much failure do we want?' Because AI is a guessing engine at the moment, and it's one of those situations where it will make assumptions based upon the information it's got. If that information is slightly flawed, you'll get a slightly flawed answer," Woolley explained.
"So, we are looking at what services we can offer. We've put policy and process around it, but that's a living document because everything's changing so fast. We are trying to drive change through carefully," he added.
In the long term, Woolley sees three potential approaches for Bristol: consuming generative AI as part of the education system, feeding data into existing models, or developing language models as a competitive differentiator. "That's the debate we're having," he said. "Then, once the right approach is chosen, I can create policies based on how we use AI."
This approach aligns with the views of Roger Joys, vice president of enterprise cloud platforms at Alaskan telecoms firm GCI. Like MIT and Woolley, Joys emphasized the importance of policy and process in safely introducing generative AI. "I would like to see our data scientists have a curated list of models that have been reviewed and approved," he said, addressing the rise of BYOAI. "Then you can say, 'You can select from these models,' rather than them just going out and using whatever they like or find."
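The curated model list Joys describes can be thought of as a small catalog: each entry records who reviewed the model and what data it is approved for, and selection fails loudly for anything off the list. This is a minimal sketch of that pattern; the model IDs, reviewer names, and sensitivity levels are invented for illustration.

```python
# Illustrative sketch of a curated model catalog: data scientists
# select from reviewed models rather than "whatever they like or find".
# All model IDs, reviewers, and sensitivity levels are hypothetical.

from dataclasses import dataclass


@dataclass(frozen=True)
class ApprovedModel:
    model_id: str
    reviewed_by: str
    approved_for: frozenset  # data sensitivity levels this model may see


CURATED_MODELS = {
    m.model_id: m
    for m in [
        ApprovedModel("hosted-llm-enterprise", "security-team",
                      frozenset({"public", "internal"})),
        ApprovedModel("local-llm", "security-team",
                      frozenset({"public", "internal", "restricted"})),
    ]
}


def select_model(model_id: str, data_level: str) -> ApprovedModel:
    """Return a model only if it is on the curated list and approved
    for the given data sensitivity level; otherwise raise."""
    model = CURATED_MODELS.get(model_id)
    if model is None:
        raise ValueError(f"{model_id} is not on the curated list")
    if data_level not in model.approved_for:
        raise ValueError(f"{model_id} is not approved for {data_level} data")
    return model
```

Failing with an explicit error, rather than silently falling back to an unreviewed model, keeps the acceptable-use policy enforceable in code as well as on paper.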
Joys advised executives to cut through the hype and establish an acceptable use policy that helps people navigate challenges. "Find the business cases," he said. "Move methodically, not necessarily slowly, but toward a known target, and let's show the value of AI."