Hey all you business buffs and Coder Radio rockstars! Today we’re going to dive into something that might keep you on the edge of your Aeron chair: the regulatory concerns around AI and LLMs (large language models), with a focus on one you might already be familiar with, ChatGPT.
Why Should You Care?
You might wonder why AI regulation matters for your business. It’s not just about 1s and 0s — it’s about ensuring ethical practices, securing data, and maintaining compliance with the rapidly evolving laws that govern this new frontier. Ignoring them might land you in hot water, and we certainly don’t want that.
Quick Refresher on Terms
AI and LLMs are powerful tools that can help with decision-making, process improvement, and even customer interaction. Ever heard of ChatGPT? Yeah, it’s one of those smart models that can talk to you just like a human.
But these systems have raised some eyebrows with lawmakers, and they’re tightening the noose around the use of these technologies.
Privacy
These AI models can handle a lot of personal information. The misuse of data or even accidental leakage can lead to serious privacy issues — the intended use on Monday might not be the use on Wednesday.
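One practical mitigation is to scrub obvious personal data from prompts before they ever leave your systems. Here is a minimal sketch, assuming you pre-process text on your side before calling any model API; the `redact` helper and the regex patterns are illustrative only and nowhere near exhaustive enough for production use:

```python
import re

# Illustrative patterns only -- real PII detection needs far broader coverage.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace matches of each pattern with a [REDACTED:<kind>] marker."""
    for kind, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[REDACTED:{kind}]", text)
    return text

prompt = "Contact Jane at jane@example.com or 555-867-5309."
print(redact(prompt))  # PII replaced with markers before the prompt is sent
```

Even a crude filter like this limits what Wednesday’s unintended use can leak, because the sensitive bits never reached the model in the first place.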
Bias
Believe it or not, even AI can be biased. If not built and trained properly, these models can make biased decisions. Sadly, this is a systemic problem in fields as important as law enforcement: there are multiple documented cases of sentencing algorithms recommending harsher sentences for minority defendants in the US than for white defendants with equivalent records. This is horrible, and it is one of the primary reasons that implicit bias needs to be kept front and center in AI / LLM development. There are also famous cases (such as the Goldman Sachs / Apple Card incident) where a wife would be offered less credit than her husband, even though he had a lower credit score and an overall worse financial profile. To be clear, the robots are not trying to be EVIL; rather, they were born and fed on data that is.
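You can start quantifying this kind of problem with a very simple check: compare the rate of favorable outcomes your model produces for each group (the demographic-parity idea). The sketch below assumes you have a list of (group, decision) pairs from test runs; the data and the functions are made up for illustration, and a real fairness audit would use more than one metric:

```python
from collections import defaultdict

def positive_rates(outcomes):
    """outcomes: iterable of (group, approved) pairs, approved being a bool.
    Returns the approval rate per group."""
    totals, approvals = defaultdict(int), defaultdict(int)
    for group, approved in outcomes:
        totals[group] += 1
        if approved:
            approvals[group] += 1
    return {g: approvals[g] / totals[g] for g in totals}

def parity_gap(outcomes):
    """Largest difference in approval rate between any two groups."""
    rates = positive_rates(outcomes).values()
    return max(rates) - min(rates)

decisions = [("a", True), ("a", True), ("a", False), ("a", True),
             ("b", True), ("b", False), ("b", False), ("b", False)]
print(parity_gap(decisions))  # 0.75 vs 0.25 approval -- a gap worth investigating
```

A large gap does not prove the model is unfair on its own, but it is exactly the kind of red flag that should trigger a closer look at the training data.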
The HAXX!
Protecting the AI system itself from unauthorized access is a big deal. You don’t want someone tampering with your AI model. If you’ve seen the recent Tom Cruise Mission: Impossible movie or that one Daniel Craig James Bond film, then you know why hacked AI is just real bad.
IP / COPYRIGHT
Who owns AI-generated content? This question is giving legal experts a real headache. Once again, I am glad I skipped law school and chose to hack on iOS ;).
Tips to Avoid Regulatory Hurdles / I AM NOT A LAWYER / NOT LEGAL ADVICE
Fear not, dear reader! Here are some straightforward tips to keep you on the right track.
1. Know Your Jurisdiction’s Regulations: Different countries and states have different rules. Stay up to date!
2. Build Transparent Models: If your AI, like ChatGPT, is making decisions, make sure it’s clear how those decisions are being made.
3. Implement Robust Security Measures: Keep the bad guys out by maintaining strong security protocols.
4. Address Bias Head-On: Be proactive in identifying and correcting bias in your models. Make sure the team reviewing test outputs from your LLM is diverse across all relevant factors. Most importantly, take a hard look at the initial training data you feed your LLM; this goes double if you buy the data from a data broker, since those folks tend not to vet their stuff super well….
5. Consult Legal Experts: It never hurts to get a professional opinion to ensure you’re in full compliance. With that said… Shakespeare had a point regarding the legal eagles….
6. Be Like Google, Back Before The Empire: Don’t be evil….
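Tip 2 is much easier to honor if every model decision leaves a paper trail you can show a regulator (or yourself) later. Here is a minimal sketch, assuming your app funnels all model calls through one wrapper; the `audited_call` helper, the `fake_model` stand-in, and the log format are all hypothetical:

```python
import json
import time

AUDIT_LOG = []  # in production this would be durable, append-only storage

def audited_call(model_fn, prompt, **params):
    """Call the model and record exactly what went in and what came out."""
    response = model_fn(prompt, **params)
    AUDIT_LOG.append(json.dumps({
        "ts": time.time(),
        "prompt": prompt,
        "params": params,
        "response": response,
    }))
    return response

# Stand-in for a real model call, just for the example.
def fake_model(prompt, temperature=0.0):
    return f"echo: {prompt}"

reply = audited_call(fake_model, "Approve this loan?", temperature=0.2)
```

The point is less the code than the habit: when someone asks why the model said what it said, you can replay the inputs instead of shrugging.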
The world of AI and LLMs is like the digital Wild West right now. Laws are changing and technology is advancing, but with some careful navigation you can stay ahead of the game and hopefully avoid the wrath of the sheriff.
Stay curious, stay compliant, and as always, feel free to drop me a line if you have any questions, and do check out Alice for your AI-powered business process automation needs! Until next time, take care!
Best,
Michael