{"id":93443,"date":"2023-06-20T10:30:00","date_gmt":"2023-06-20T10:30:00","guid":{"rendered":"https:\/\/cloudnewshub.com\/?p=93443"},"modified":"2023-06-20T10:30:00","modified_gmt":"2023-06-20T10:30:00","slug":"chatgpt-is-creating-a-legal-and-compliance-headache-for-business","status":"publish","type":"post","link":"https:\/\/cloudnewshub.com\/?p=93443","title":{"rendered":"ChatGPT is creating a legal and compliance headache for business"},"content":{"rendered":"<div><img decoding=\"async\" src=\"http:\/\/cloudnewshub.com\/wp-content\/uploads\/2023\/06\/chatgpt-is-creating-a-legal-and-compliance-headache-for-business.jpg\" class=\"ff-og-image-inserted\"><\/div>\n<p>Over the past few months, <a href=\"https:\/\/www.techtarget.com\/whatis\/definition\/ChatGPT\">ChatGPT<\/a> has taken the professional world by storm. Its ability to answer almost any question and generate content has led people to use the artificial intelligence-powered chatbot for completing administrative tasks, writing long-form content like letters and essays, creating resumes, and much more.<\/p>\n<p>According to <a href=\"https:\/\/www.kornferry.com\/insights\/this-week-in-leadership\/chatgpt-invades-the-workplace\">research<\/a> from Korn Ferry, 46% of professionals are using ChatGPT for finishing tasks in the workplace. Another <a href=\"https:\/\/www.hrgrapevine.com\/content\/article\/2023-02-21-half-of-workers-think-chatgpt-will-help-them-perform-better\">survey<\/a> found that 45% of employees see ChatGPT as a means of achieving better results in their roles.&nbsp;<\/p>\n<p>But there seems to be <a href=\"https:\/\/www.computerweekly.com\/news\/365532535\/NCSC-warns-over-AI-language-models-but-rejects-cyber-alarmism\">a darker side to artificial intelligence (AI) software<\/a> that is being overlooked by employees. Many employers fear their staff sharing sensitive corporate information with AI chatbots like ChatGPT, which could end up in the hands of cyber criminals. 
And there\u2019s also a question about copyright when employees use ChatGPT for automatically generating content.<\/p>\n<p>AI tools can even be <a href=\"https:\/\/www.theguardian.com\/technology\/2023\/feb\/08\/biased-ai-algorithms-racy-women-bodies\">biased and discriminatory<\/a>, potentially causing huge problems for companies relying on them for screening potential employees or answering questions from customers. These issues have led many experts to question the security and legal implications of ChatGPT\u2019s usage in the workplace.<\/p>\n<section class=\"section main-article-chapter\" data-menu-title=\"Increased data security risks\">\n<h3 class=\"section-title\"><i class=\"icon\" data-icon=\"1\"><\/i>Increased data security risks&nbsp;<\/h3>\n<p>The increased use of generative AI tools in the workplace makes businesses highly vulnerable to serious data leaks, according to Neil Thacker, chief information security officer (CISO) for EMEA and Latin America at <a href=\"https:\/\/www.netskope.com\/\">Netskope<\/a>.<\/p>\n<p>He points out that OpenAI, the creator of ChatGPT, uses data and queries stored on its servers for training its models. 
And should cyber criminals breach OpenAI\u2019s systems, they could gain access to \u201cconfidential and sensitive data\u201d that would be \u201cdamaging\u201d for businesses.&nbsp;<\/p>\n<p>OpenAI has since implemented &#8220;opt-out&#8221; and &#8220;disable history&#8221; options in a bid to improve data privacy, but Thacker says users will still need to manually select these.&nbsp;<\/p>\n<p>While laws like the UK\u2019s <a href=\"https:\/\/www.computerweekly.com\/news\/365535452\/UK-presses-on-with-post-Brexit-data-protection-reform\">Data Protection and Digital Information Bill<\/a> and the <a href=\"https:\/\/www.computerweekly.com\/news\/366541741\/EU-AI-act-passes-a-significant-milestone\">European Union&#8217;s proposed AI Act<\/a> are a step in the right direction regarding the regulation of software like ChatGPT, Thacker says there are \u201ccurrently few assurances about the way companies whose products use generative AI will process and store data\u201d.<\/p>\n<\/section>\n<section class=\"section main-article-chapter\" data-menu-title=\"Banning AI isn\u2019t the solution\">\n<h3 class=\"section-title\"><i class=\"icon\" data-icon=\"1\"><\/i>Banning AI isn\u2019t the solution&nbsp;<\/h3>\n<p>Employers concerned about the security and compliance risks of AI services may decide to ban their use in the workplace. 
But Thacker warns this could backfire.&nbsp;<\/p>\n<p>\u201cBanning AI services from the workplace will not alleviate the problem as it would likely cause \u2018shadow AI\u2019 \u2013 the unapproved use of third-party AI services outside of company control,\u201d he says.&nbsp;<\/p>\n<blockquote class=\"main-article-pullquote\">\n<p><figure> AI is more valuable when combined with human intelligence <\/figure><figcaption> <strong>Ingrid Verschuren, Dow Jones<\/strong> <\/figcaption><i class=\"icon\" data-icon=\"z\"><\/i> <\/p>\n<\/blockquote>\n<p>Ultimately, it is the responsibility of security leaders to ensure that employees use AI tools safely and responsibly. To do this, they need to \u201cknow where sensitive information is being stored once fed into third-party systems, who is able to access that data, how they will use it, and how long it will be retained\u201d.<\/p>\n<p>Thacker adds: \u201cCompanies should realise that employees will be embracing generative AI integration services from trusted enterprise platforms such as Teams, Slack, Zoom and so on. 
Similarly, employees should be made aware that the default settings when accessing these services could lead to sensitive data being shared with a third party.\u201d<\/p>\n<\/section>\n<section class=\"section main-article-chapter\" data-menu-title=\"Using AI tools safely in the workplace\">\n<h3 class=\"section-title\"><i class=\"icon\" data-icon=\"1\"><\/i>Using AI tools safely in the workplace&nbsp;<\/h3>\n<p>Individuals who use ChatGPT and other AI tools at work could unknowingly commit copyright infringement, meaning their employer may be exposed to costly lawsuits and fines.&nbsp;<\/p>\n<p>Barry Stanton, partner and head of the employment and immigration team at law firm <a href=\"https:\/\/www.boyesturner.com\/\">Boyes Turner<\/a>, explains: \u201cBecause ChatGPT generates documents produced from information already stored and held on the internet, some of the material it uses may inevitably be subject to copyright.&nbsp;<\/p>\n<p>\u201cThe challenge \u2013 and risk \u2013 for businesses is that they may not know when employees have infringed another\u2019s copyright, because they can\u2019t check the information source.\u201d&nbsp;<\/p>\n<p>For businesses looking to experiment with AI in a safe and ethical manner, it\u2019s paramount that security and HR teams create and implement \u201cvery clear policies specifying when, how and in what circumstances it can be used\u201d.<\/p>\n<p>Stanton says businesses could decide to use AI \u201csolely for internal purposes\u201d or \u201cin limited external circumstances\u201d. 
He adds: \u201cWhen the business has outlined these permissions, the IT security team needs to ensure that it then, so far as technically possible, locks down any other use of ChatGPT.\u201d<\/p>\n<\/section>\n<section class=\"section main-article-chapter\" data-menu-title=\"The rise of copycat chatbots\">\n<h3 class=\"section-title\"><i class=\"icon\" data-icon=\"1\"><\/i>The rise of copycat chatbots&nbsp;<\/h3>\n<p>As the hype surrounding ChatGPT and generative AI continues to grow, cyber criminals are taking advantage by creating copycat chatbots designed to steal data from unsuspecting users.<\/p>\n<p>Alex Hinchliffe, threat intelligence analyst at <a href=\"https:\/\/www.paloaltonetworks.com\/unit42\">Unit 42, Palo Alto Networks<\/a>, says: \u201cSome of these copycat chatbot applications use their own large language models, while many claim to use the ChatGPT public API. However, these copycat chatbots tend to be pale imitations of ChatGPT or simply malicious fronts to gather sensitive or confidential data.&nbsp;<\/p>\n<p>\u201cThe risk of serious incidents linked to these copycat apps is increased when staff start experimenting with these programs on company data. It is also likely that some of these copycat chatbots are manipulated to give wrong answers or promote misleading information.\u201d<\/p>\n<p>To stay one step ahead of spoofed AI applications, Hinchliffe says users should avoid opening ChatGPT-related emails or links that appear to be suspicious and always access ChatGPT via OpenAI\u2019s official website.&nbsp;<\/p>\n<p>CISOs can also mitigate the risk posed by fake AI services by only allowing employees to access apps via legitimate websites, Hinchliffe recommends. 
They should also educate employees on the implications of sharing confidential information with AI chatbots.&nbsp;<\/p>\n<p>Hinchliffe says CISOs particularly concerned about the data privacy implications of ChatGPT should consider implementing software such as a <a href=\"https:\/\/www.computerweekly.com\/blog\/Networks-Generation\/The-Importance-of-CASB-And-Its-Limitations\">cloud access security broker (CASB)<\/a>.<\/p>\n<p>\u201cThe key capabilities are having comprehensive app usage visibility for complete monitoring of all software as a service (SaaS) usage activity, including employee use of new and emerging generative AI apps that can put data at risk,\u201d he adds.<\/p>\n<p>\u201cGranular SaaS application controls mean allowing employee access to business-critical applications, while limiting or blocking access to high-risk apps like generative AI. And finally, consider advanced data security that uses machine learning to classify data and detect and stop company secrets being leaked to generative AI apps inadvertently.\u201d<\/p>\n<\/section>\n<section class=\"section main-article-chapter\" data-menu-title=\"Data reliability implications\">\n<h3 class=\"section-title\"><i class=\"icon\" data-icon=\"1\"><\/i>Data reliability implications&nbsp;<\/h3>\n<p>In addition to cyber security and copyright implications, another major flaw of ChatGPT is the reliability of the data powering its algorithms. Ingrid Verschuren, head of data strategy at <a href=\"https:\/\/www.dowjones.com\/\">Dow Jones<\/a>, warns that even \u201cminor flaws will make outputs unreliable\u201d.<\/p>\n<p>She tells Computer Weekly: \u201cAs professionals look to leverage AI and chatbots in the workplace, we are hearing growing concerns around auditability and compliance. 
The application and implementation of these emerging technologies therefore requires careful consideration \u2013 particularly when it comes to the source and quality of the data used to train and feed the models.\u201d<\/p>\n<p>Generative AI applications scrape data from across the internet and use this information to answer questions from users. But given that not every piece of internet-based content is accurate, there\u2019s a risk of apps like ChatGPT spreading misinformation.&nbsp;<\/p>\n<p>Verschuren believes the creators of generative AI software should ensure data is only mined from \u201creputable, licensed and regularly updated sources\u201d to tackle misinformation. \u201cThis is why human expertise is so crucial \u2013 AI alone cannot determine which sources to use and how to access them,\u201d she adds.<\/p>\n<p>\u201cOur philosophy at Dow Jones is that AI is more valuable when combined with human intelligence. We call this collaboration between machines and humans &#8216;authentic intelligence&#8217;, which combines the automation potential of the technology with the wider decisive context that only a subject matter expert can bring.\u201d<\/p>\n<\/section>\n<section class=\"section main-article-chapter\" data-menu-title=\"Using ChatGPT responsibly\">\n<h3 class=\"section-title\"><i class=\"icon\" data-icon=\"1\"><\/i>Using ChatGPT responsibly&nbsp;<\/h3>\n<p>Businesses allowing their staff to use ChatGPT and generative AI in the workplace open themselves up to \u201csignificant legal, compliance, and security considerations\u201d, according to Craig Jones, vice president of security operations at <a href=\"https:\/\/www.ontinue.com\/company\/\">Ontinue<\/a>.<\/p>\n<p>However, he says there are a range of steps that firms can take to ensure their employees use this technology responsibly and securely. 
The first is taking into account data protection regulations.&nbsp;<\/p>\n<p>\u201cOrganisations need to comply with <a href=\"https:\/\/www.computerweekly.com\/opinion\/Could-your-employees-use-of-ChatGPT-put-you-in-breach-of-GDPR\">regulations such as GDPR or CCPA<\/a>. They should implement robust data handling practices, including obtaining user consent, minimising data collection, and encrypting sensitive information,\u201d he says. \u201cFor example, a healthcare organisation utilising ChatGPT must handle patient data in compliance with the Data Protection Act to protect patient privacy.\u201d<\/p>\n<p>Second, Jones urges businesses to consider intellectual property rights when using ChatGPT, since it is essentially a content generation tool. He recommends that firms \u201cestablish clear guidelines regarding ownership and usage rights\u201d for <a href=\"https:\/\/www.computerweekly.com\/feature\/Generative-AI-at-watershed-moment-with-spate-of-legal-challenges\">proprietary and copyrighted data<\/a>.&nbsp;<\/p>\n<p>\u201cBy defining ownership, organisations can prevent disputes and unauthorised use of intellectual property. For instance, a media company using ChatGPT needs to establish ownership of articles or creative works produced by the AI &#8211; this is very much open to interpretation as is,\u201d he says.&nbsp;<\/p>\n<p>\u201cIn the context of legal proceedings, organisations may be required to produce ChatGPT-generated content for e-discovery or legal hold purposes. Implementing policies and procedures for data preservation and legal holds is crucial to meet legal obligations. Organisations must ensure that the generated content is discoverable and retained appropriately. 
For example, a company involved in a lawsuit should have processes in place to retain and produce ChatGPT conversations as part of the e-discovery process.\u201d<\/p>\n<p>Another consideration is that AI tools often exhibit <a href=\"https:\/\/www.techtarget.com\/searchcio\/news\/365535486\/Federal-agencies-promise-action-against-AI-driven-harm\">signs of bias and discrimination<\/a>, which can cause serious reputational and legal damage to businesses using this software for customer service and hiring. But Jones says there are several techniques businesses can adopt to tackle AI bias, such as holding regular audits and monitoring the responses provided by chatbots.&nbsp;<\/p>\n<p>He adds: \u201cIn addition, organisations need to develop an approach to assessing the output of ChatGPT, ensuring that experienced humans are in the loop to determine the validity of the outputs. This becomes increasingly important if the output of a ChatGPT-based process feeds into a subsequent automated stage. In early adoption phases, we should look at ChatGPT as decision support as opposed to the decision maker.\u201d<\/p>\n<p>Despite the security and legal implications of using ChatGPT at work, AI technologies are still in their infancy and are here to stay. Jake Moore, global cyber security advisor at <a href=\"https:\/\/www.eset.com\/\">ESET<\/a>, concludes: \u201cIt must be reminded that we are still in the very early stages of chatbots. But as time goes on, they will supersede traditional search engines and become a part of life. The data generated from our Google searches can be sporadic and generic, but chatbots are already becoming more personal with the human-led conversations in order to seek out more from us.\u201d<\/p>\n<\/section>\n","protected":false},"excerpt":{"rendered":"<p>Over the past few months, ChatGPT has taken the professional world by storm. 
Its ability to answer almost any question and generate content has led people to use the artificial intelligence-powered chatbot for completing administrative tasks, writing long-form content like letters and essays, creating resumes, and much more. According to research from Korn Ferry, 46% [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":93444,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[533],"tags":[],"class_list":["post-93443","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-it"],"_links":{"self":[{"href":"https:\/\/cloudnewshub.com\/index.php?rest_route=\/wp\/v2\/posts\/93443","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/cloudnewshub.com\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/cloudnewshub.com\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/cloudnewshub.com\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/cloudnewshub.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=93443"}],"version-history":[{"count":0,"href":"https:\/\/cloudnewshub.com\/index.php?rest_route=\/wp\/v2\/posts\/93443\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/cloudnewshub.com\/index.php?rest_route=\/wp\/v2\/media\/93444"}],"wp:attachment":[{"href":"https:\/\/cloudnewshub.com\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=93443"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/cloudnewshub.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=93443"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/cloudnewshub.com\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=93443"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}