I am hearing LOTS about this AI thing – part 2
By Malcolm and John Harding of Compu-Home
In our last column we took a brief look at Artificial Intelligence (AI) and some of its enhancements and benefits for our daily lives. This time we will consider the reasons for being cautious and discriminating about how we allow AI's influence to expand, and sometimes even take over aspects of our lives, in unexpected ways that are not always positive. We can only scratch the surface here, but this topic is not going away and there will be lots of opportunity to dig deeper.
Automation has been a feature of manufacturing in North America since at least the 1930s, with the result that human workers lost jobs to machines, or at least had to learn new skills to keep bread on the table. AI, the modern descendant of the assembly line, has now infiltrated even white-collar positions in ways we could not have imagined a decade ago. It is important that we monitor the details and the results of this transition, to ensure that we keep control over this evolution in the workplace and beyond. The old expression about the tail not wagging the dog comes to mind: AI should be serving the needs of society, and not the other way around.
AI should raise a huge warning flag in the areas of privacy and security, given its ability to accumulate the bits and pieces of so many aspects of our modern personal, professional, social and financial lives. We allow this by sharing various little details here and there, never expecting that they could be gathered into an unexpected sum of the parts. The result can be that others (friends, enemies, associates, governments, financial institutions, employers, or even the media) sometimes know more about us than we imagined possible. Without becoming paranoid, we must be aware of the ways our privacy and security can be compromised, and remember that we now have to be more careful than we were in the past. We can do this by being vigilant about what personal information we divulge online and staying mindful of how it might be used.
We must remember that for all of its information-gathering power, AI is not infallible. Most users will have noticed that search services like Google and Bing now often place an AI-generated summary at the top of the results. These summaries are sometimes very efficient and helpful, but they are frequently superficial and contain errors, so at the very least we should scroll through the regular results for clarification or further information. One weakness of AI search results is their overuse of YouTube videos as references, which are very often poorly produced and inaccurate. Students and others who have become over-reliant on ChatGPT and other chatbots to do their writing for them have sometimes learned this lesson to their chagrin.
Knowing that information from AI sources can often be suspect, we must be especially careful when we share it. Fact-checking has to be the norm, with special attention when the content seems sensational or sensitive. MIS-information (simply mistaken) and DIS-information (deliberately misleading) must not be perpetuated just because it is easy to click SHARE.
It might seem on the surface that the management and ethical use of AI is a matter for governments and the industry itself, but there are certainly roles for individual users. We can look for politicians whose policies focus on online accountability. We can enter into discussions (like this one) to emphasize that AI must remain a tool that we control. We can use all of the privacy settings available in the browsers and online apps we rely on. Finally, with some research, we can find and patronize companies and services that are trying to maintain a balance between the convenience of AI and its dangers.
