News and Events

New course explains the importance of artificial intelligence safety at Methodist

Published: July 1, 2024
Compliance

Every day, it seems like artificial intelligence – AI for short – is popping up in more places. AI refers to computer systems that mimic human intelligence to complete a task. AI includes things such as machine learning algorithms, natural language processing, robotics and other tools that process images or data.

There are now AI systems that can read X-rays to look for tumors, draw a lifelike picture or write an article for you to post on your company’s website. As remarkable as all of these things are, we haven’t reached the level of science fiction AI, but we’re getting closer.

AI tools can perform impressive, important tasks. But they also need to be used wisely. It’s important that you’re informed about AI’s capabilities and risks so you use it appropriately and responsibly. Among the resources available is a voluntary artificial intelligence course on Workday.

 

AI’s issues with bias and transparency

AI systems aren’t without their drawbacks. The two most critical risks posed by AI in health care settings are its tendency toward bias and its lack of transparency. AI systems have to be trained on existing information before they can function. The creator of an AI system has to tune it by feeding it large data sets for analysis, looking at the results it spits out and then modifying the algorithm until it produces the desired results.

Through this process, bias may become ingrained in the AI as a reflection of bias in either the training content or the designers of the AI system. The designers may not even be aware of these biases until the AI system expresses them, and the system may not express them until it’s too late to make changes. There are multiple examples of AI systems that perpetuate racism and sexism or deliver dark responses.

Transparency issues can arise because AI creators often don’t release the specifics of the content they used to train the AI. The creators may not even be aware of the full content of the training material because the data set is so large or because it’s an expanding pool of content from other users.

Additionally, AI systems generally can’t provide true insight into how they generate their results. And, because AI algorithms are usually proprietary programs – and incredibly complex – the algorithm itself is also inaccessible. This means a typical result or response from an AI system is based on an unknown algorithm processing unknown data of unknown credibility without the ability to properly explain its reasoning or cite its sources.

 

Tips for responsible AI use

Should we just stop using AI? That would eliminate the problems we’ve encountered, but it would also eliminate all of the good that can come with appropriate AI use. It’s also easier said than done. As AI has expanded, it’s been integrated into just about every type of software and device.

The far better option is to be informed about AI. We’re each responsible for how we use the tools at our disposal – and that includes AI. So here’s what you can do:

  • Be wary of where you enter private information. AI is not on the hook for releasing that information. You are.
  • Double-check any information that an AI provides for you. It doesn’t know what truth is. You do.
  • Don’t rely on AI to make judgments for you. AI can’t determine what’s moral or right. You can.

If you want to learn more, check out our AI training on Workday or contact Joe Tweedt in the Compliance Department.