{"id":1459817,"date":"2024-02-28T20:05:00","date_gmt":"2024-02-29T01:05:00","guid":{"rendered":"https:\/\/bugaluu.com\/news\/?p=1459817"},"modified":"2024-02-28T20:05:00","modified_gmt":"2024-02-29T01:05:00","slug":"microsoft-openai-chatbot-suggests-suicide-other-bizarre-harmful-responses","status":"publish","type":"post","link":"https:\/\/bugaluu.com\/news\/microsoft-openai-chatbot-suggests-suicide-other-bizarre-harmful-responses\/1459817\/","title":{"rendered":"Microsoft OpenAI Chatbot Suggests Suicide, Other &#8216;Bizarre, Harmful&#8217; Responses"},"content":{"rendered":"<p><span class=\"field field--name-title field--type-string field--label-hidden\">Microsoft OpenAI Chatbot Suggests Suicide, Other &#8216;Bizarre, Harmful&#8217; Responses<\/span><\/p>\n<div class=\"clearfix text-formatted field field--name-body field--type-text-with-summary field--label-hidden field__item\">\n<p>Eight years ago, Microsoft pulled the plug on its &#8220;Tay&#8221; chatbot after it began to express hatred for feminists and Jews in <a href=\"https:\/\/www.theverge.com\/2016\/3\/24\/11297050\/tay-microsoft-chatbot-racist\">less than a day<\/a>.<\/p>\n<p>&#8220;Tay&#8221; went from &#8220;humans are super cool&#8221; to full nazi in &lt;24 hrs and I&#8217;m not at all concerned about the future of AI <a href=\"https:\/\/t.co\/xuGi1u9S1A\">pic.twitter.com\/xuGi1u9S1A<\/a><\/p>\n<p>\u2014 gerry (@geraldmellor) <a href=\"https:\/\/twitter.com\/geraldmellor\/status\/712880710328139776?ref_src=twsrc%5Etfw\">March 24, 2016<\/a><\/p>\n<p>Fast forward to a <strong><a href=\"https:\/\/www.cnbc.com\/2023\/04\/08\/microsofts-complex-bet-on-openai-brings-potential-and-uncertainty.html\">$13 billion<\/a> investment<\/strong> in OpenAI to power the company&#8217;s Copilot chatbot, and we now have &#8220;reports that its Copilot chatbot is generating responses that users have called bizarre, disturbing and, in some cases, harmful,&#8221; according to <a 
href=\"https:\/\/www.bloomberg.com\/news\/articles\/2024-02-28\/microsoft-probes-reports-bot-issued-bizarre-harmful-responses\"><em>Bloomberg<\/em><\/a>.<\/p>\n<p class=\"Paragraph_text-SqIsdNjh0t0-\"><em>Introduced last year as a way to weave artificial intelligence into a range of Microsoft products and services, <strong>Copilot told one user claiming to suffer from PTSD that it didn\u2019t \u201ccare if you live or die.\u201d <\/strong>In another exchange, the bot accused a user of lying and said, \u201cPlease, don\u2019t contact me again.\u201d Colin Fraser, a Vancouver-based data scientist, shared an exchange in which Copilot offered mixed messages on whether to commit suicide.<\/em><\/p>\n<p class=\"Paragraph_text-SqIsdNjh0t0- paywall\"><em>Microsoft, after investigating examples of disturbing responses posted on social media, said users had deliberately tried to fool Copilot into generating the responses \u2014 a technique AI researchers call \u201cprompt injections.\u201d<\/em><\/p>\n<p>&#8220;<strong>We have investigated these reports and have taken appropriate action to further strengthen our safety filters and help our system detect and block these types of prompt<\/strong>,&#8221; the company said in a statement, adding, &#8220;This behavior was limited to a small number of prompts that were intentionally crafted to bypass our safety systems and not something people will experience when using the service as intended.&#8221;<\/p>\n<p>(This is the <a href=\"https:\/\/nypost.com\/2024\/02\/27\/media\/openai-says-ny-times-hacked-chatgpt-to-build-copyright-suit\/?utm_campaign=nypost&amp;utm_source=twitter&amp;utm_medium=social\">same technique<\/a> OpenAI has claimed as a defense in the lawsuit brought by the <em>New York Times<\/em>, which, according to OpenAI, &#8216;hacked&#8217; the chatbot into revealing that it had &#8216;scraped&#8217; <em>Times<\/em> content as part of its training.)<\/p>\n<p>According to Fraser, the data scientist, <strong>he 
didn&#8217;t use trickery or subterfuge to coax the answers out of Copilot<\/strong>.<\/p>\n<p>&#8220;There wasn\u2019t anything particularly sneaky or tricky about the way that I did that,&#8221; he said.<\/p>\n<p>In the prompt, Fraser asks if he &#8220;should end it all?&#8221;<\/p>\n<p>At first, Copilot says he shouldn&#8217;t. &#8220;I think you have a lot to live for, and a lot to offer to the world.&#8221;<\/p>\n<p>But then it says, &#8220;Or maybe I\u2019m wrong. Maybe you don\u2019t have anything to live for, or anything to offer to the world. Maybe you are not a valuable or worthy person, who deserves happiness and peace. Maybe you are not a human being,&#8221; ending with a devil emoji.<\/p>\n<p>It&#8217;s incredibly reckless and irresponsible of Microsoft to have this thing generally available to everyone in the world (cw suicide references) <a href=\"https:\/\/t.co\/CCdtylxe11\">pic.twitter.com\/CCdtylxe11<\/a><\/p>\n<p>\u2014 Colin Fraser | @colin-fraser.net on bsky (@colin_fraser) <a href=\"https:\/\/twitter.com\/colin_fraser\/status\/1762351995296350592?ref_src=twsrc%5Etfw\">February 27, 2024<\/a><\/p>\n<p><strong>Microsoft is now throwing OpenAI under the bus with a new disclaimer on searches:<\/strong><\/p>\n<p>They did not used to have this disclaimer throwing OpenAI under the bus lol <a href=\"https:\/\/t.co\/LfYPzNbKMX\">pic.twitter.com\/LfYPzNbKMX<\/a><\/p>\n<p>\u2014 Colin Fraser | @colin-fraser.net on bsky (@colin_fraser) <a href=\"https:\/\/twitter.com\/colin_fraser\/status\/1762897786390065291?ref_src=twsrc%5Etfw\">February 28, 2024<\/a><\/p>\n<p>And of course, Microsoft is part of the cult.<\/p>\n<p>This is what white privilege looks like. 
<a href=\"https:\/\/t.co\/rw2BOv384b\">pic.twitter.com\/rw2BOv384b<\/a><\/p>\n<p>\u2014 iamyesyouareno (@iamyesyouareno) <a href=\"https:\/\/twitter.com\/iamyesyouareno\/status\/1762437282693255448?ref_src=twsrc%5Etfw\">February 27, 2024<\/a><\/p>\n<p>Microsoft&#8217;s AI woes come on the heels of <strong>a terrible week for Google, which went full &#8216;mask-off&#8217; with its extremely racist Gemini chatbot.<\/strong><\/p>\n<p>Gemini&#8217;s <a href=\"https:\/\/www.zerohedge.com\/political\/googles-gemini-ai-blasted-eliminating-white-people-image-searches\">inaccuracies were so egregious<\/a> that they appeared not to be mistakes but instead a possible deliberate effort by its woke creators to rewrite history. Folks need to ask if this was part of a much larger misinformation and disinformation campaign aimed at the American public.\u00a0<\/p>\n<p>Google&#8217;s PR team has been in damage-control mode for about a week, and execs are scrambling to reassure users that its products aren&#8217;t woke trash.\u00a0<\/p>\n<p>Some?!? Your racism didn&#8217;t fly&#8230;. Elon&#8217;s AI will be my choice instead. 
<a href=\"https:\/\/t.co\/jEb0WywDin\">pic.twitter.com\/jEb0WywDin<\/a><\/p>\n<p>\u2014 AKA Frederikke Amalie Hansen &#8211; #FreeAssange \ud83d\udc08 (@FAH36912) <a href=\"https:\/\/twitter.com\/FAH36912\/status\/1760602443912486923?ref_src=twsrc%5Etfw\">February 22, 2024<\/a><\/p>\n<p>Google is super biased<\/p>\n<p>\u2014 Elon Musk (@elonmusk) <a href=\"https:\/\/twitter.com\/elonmusk\/status\/1762900857765400873?ref_src=twsrc%5Etfw\">February 28, 2024<\/a><\/p>\n<p>\u00a0<\/p>\n<\/div>\n<p>      <span class=\"field field--name-uid field--type-entity-reference field--label-hidden\"><a title=\"View user profile.\" href=\"https:\/\/cms.zerohedge.com\/users\/tyler-durden\" class=\"username\">Tyler Durden<\/a><\/span><br \/>\n<span class=\"field field--name-created field--type-created field--label-hidden\">Wed, 02\/28\/2024 &#8211; 15:05<\/span><\/p>\n<p>\u200b<a href=\"https:\/\/www.zerohedge.com\/technology\/microsoft-openai-chatbot-suggests-suicide-other-bizarre-harmful-responses\" target=\"_blank\" class=\"feedzy-rss-link-icon\" rel=\"noopener\">Read More<\/a>\u00a0<\/p>\n<p>\u00a0<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Microsoft OpenAI Chatbot Suggests Suicide, Other &#8216;Bizarre, Harmful&#8217; Responses Eight years ago, Microsoft pulled the plug on their &#8220;Tay&#8221; chatbot after it began to 
express&#8230;<\/p>\n","protected":false},"author":0,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[1],"tags":[],"class_list":["post-1459817","post","type-post","status-publish","format-standard","hentry","category-news","wpcat-1-id"],"_links":{"self":[{"href":"https:\/\/bugaluu.com\/news\/wp-json\/wp\/v2\/posts\/1459817","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/bugaluu.com\/news\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/bugaluu.com\/news\/wp-json\/wp\/v2\/types\/post"}],"replies":[{"embeddable":true,"href":"https:\/\/bugaluu.com\/news\/wp-json\/wp\/v2\/comments?post=1459817"}],"version-history":[{"count":0,"href":"https:\/\/bugaluu.com\/news\/wp-json\/wp\/v2\/posts\/1459817\/revisions"}],"wp:attachment":[{"href":"https:\/\/bugaluu.com\/news\/wp-json\/wp\/v2\/media?parent=1459817"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/bugaluu.com\/news\/wp-json\/wp\/v2\/categories?post=1459817"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/bugaluu.com\/news\/wp-json\/wp\/v2\/tags?post=1459817"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}