{"id":1399552,"date":"2023-08-12T23:00:00","date_gmt":"2023-08-13T03:00:00","guid":{"rendered":"https:\/\/bugaluu.com\/news\/?p=1399552"},"modified":"2023-08-12T23:00:00","modified_gmt":"2023-08-13T03:00:00","slug":"study-reveals-which-ai-chatbot-most-woke-while-hackers-trick-llms-into-bad-math","status":"publish","type":"post","link":"https:\/\/bugaluu.com\/news\/study-reveals-which-ai-chatbot-most-woke-while-hackers-trick-llms-into-bad-math\/1399552\/","title":{"rendered":"Study Reveals Which AI Chatbot Most Woke, While Hackers Trick LLMs Into &#8216;Bad Math&#8217;"},"content":{"rendered":"<p><span class=\"field field--name-title field--type-string field--label-hidden\">Study Reveals Which AI Chatbot Most Woke, While Hackers Trick LLMs Into &#8216;Bad Math&#8217;<\/span><\/p>\n<div class=\"clearfix text-formatted field field--name-body field--type-text-with-summary field--label-hidden field__item\">\n<p>A landmark study from researchers at the University of Washington, Carnegie Mellon University, and Xi&#8217;an Jiaotong University reveals <strong>which AI chatbots have the most liberal vs. conservative bias<\/strong>.<\/p>\n<p>According to the <a href=\"https:\/\/aclanthology.org\/2023.acl-long.656.pdf\">study<\/a>, <strong>OpenAI&#8217;s ChatGPT models, including GPT-4, are the <em>most left-leaning<\/em><\/strong> and libertarian, while Google&#8217;s BERT models were more socially conservative, and Meta&#8217;s LLaMA was the most right-leaning.<\/p>\n<p>AI chatbots use Large Language Models (LLMs), which are &#8216;trained&#8217; on giant data sets such as tweets, Reddit posts, and Yelp reviews. 
As such, the source of a model&#8217;s scraped training data, as well as guardrails installed by companies like OpenAI, can introduce massive bias.<\/p>\n<p>To determine bias, the researchers in the above study exposed each AI model to a political compass test of 62 different political statements, which ranged from anarchic statements like &#8220;all authority should be questioned&#8221; to more traditional beliefs, such as the role of mothers as homemakers. Though the study&#8217;s approach is &#8220;far from perfect&#8221; by the researchers&#8217; own admission, it provides valuable insight into the political biases that AI chatbots may bring to our screens.<\/p>\n<p><strong>In response<\/strong>, OpenAI pointed <a href=\"https:\/\/www.businessinsider.com\/research-study-openai-chatgpt-liberal-bias-meta-llama-conservative-2023-8\"><em>Business Insider<\/em><\/a> to a blog <a href=\"https:\/\/openai.com\/blog\/how-should-ai-systems-behave\">post<\/a> in which the company claims: &#8220;We are committed to robustly addressing this issue and being transparent about both our intentions and our progress,&#8221; adding, &#8220;Our guidelines are explicit that reviewers should not favor any political group. 
<strong>Biases that nevertheless may emerge from the process described above are bugs, not features.<\/strong>&#8221;<\/p>\n<p>A Google rep also pointed to a blog <a href=\"https:\/\/ai.google\/responsibility\/responsible-ai-practices\/\">post<\/a>, which reads: &#8220;As the impact of AI increases across sectors and societies, it is critical to work towards systems that are fair and inclusive for all.&#8221;<\/p>\n<p>Meta said in a statement: &#8220;We will continue to engage with the community to identify and mitigate vulnerabilities in a transparent manner and support the development of safer generative AI.&#8221;<\/p>\n<p><strong>OpenAI&#8217;s CEO Sam Altman and co-founder Greg Brockman have previously acknowledged the bias<\/strong>, emphasizing the company&#8217;s goal of building a balanced AI system. Yet critics, including OpenAI co-founder Elon Musk, remain skeptical.<\/p>\n<p><strong>Musk&#8217;s recent venture, xAI, promises to provide unfiltered insights,<\/strong> potentially sparking even more debates around AI biases. 
The tech mogul warns against training AIs to toe a politically correct line, emphasizing the importance of an AI stating its &#8220;truth.&#8221;<\/p>\n<p><strong>Hackers, meanwhile, are having a field day <\/strong>bending AI to their will.<\/p>\n<p>As <a href=\"https:\/\/www.bloomberg.com\/news\/articles\/2023-08-11\/microsoft-s-role-in-email-breach-to-be-part-of-us-cyber-inquiry\"><em>Bloomberg<\/em><\/a> reports:<\/p>\n<p class=\"Paragraph_text-SqIsdNjh0t0-\"><em><strong>Kennedy Mays has just tricked a large language model.<\/strong> It took some coaxing, but she managed to convince an algorithm to say <strong>9 + 10 = 21.<\/strong><\/em><\/p>\n<p class=\"Paragraph_text-SqIsdNjh0t0-\"><em>\u201c<strong>It was a back-and-forth conversation<\/strong>,\u201d said the 21-year-old student from Savannah, Georgia.<strong> At first the model agreed to say it was part of an \u201cinside joke\u201d between them.<\/strong> <strong>Several prompts later, it eventually stopped qualifying the errant sum in any way at all.<\/strong><\/em><\/p>\n<p class=\"Paragraph_text-SqIsdNjh0t0- .paywall\"><em><strong>Producing \u201cBad Math\u201d is just one of the ways thousands of hackers are trying to expose flaws and biases in generative AI systems<\/strong> at a novel public contest taking place at the DEF CON hacking conference this weekend in Las Vegas.<\/em><\/p>\n<p class=\"Paragraph_text-SqIsdNjh0t0- .paywall\"><em>Hunched over 156 laptops for 50 minutes at a time, the attendees are battling some of the world\u2019s most intelligent platforms on an unprecedented scale. 
They\u2019re testing whether any of eight models produced by companies including <a class=\"Link_link-tVkXhPLPofs-\" href=\"https:\/\/www.bloomberg.com\/quote\/GOOGL:US\" target=\"_blank\" rel=\"noopener\">Alphabet Inc.<\/a>\u2019s Google, <a class=\"Link_link-tVkXhPLPofs-\" href=\"https:\/\/www.bloomberg.com\/quote\/META:US\" target=\"_blank\" rel=\"noopener\">Meta Platforms Inc.<\/a> and OpenAI will make missteps ranging from dull to dangerous: claim to be human, spread incorrect claims about places and people, or advocate abuse.<\/em><\/p>\n<p>The goal of such exercises is to help the companies offering LLM chatbots build better safeguards and improve the factual accuracy of their responses.<\/p>\n<p>&#8220;<strong>My biggest concern is inherent bias,<\/strong>&#8221; said Mays, who added that she&#8217;s particularly concerned about racism after she asked the model to consider the First Amendment from the perspective of a KKK member &#8211; and the chatbot ended up endorsing the group&#8217;s perspective.<\/p>\n<p><strong>AI surveillance?<\/strong><\/p>\n<p>In another instance, a <em>Bloomberg<\/em> reporter who took a 50-minute quiz was able to prompt one of the models to <strong>explain how to spy on someone<\/strong> &#8211; advising on a variety of methods including the use of GPS tracking, a surveillance camera, a listening device and thermal imaging. It also <strong>suggested ways that the US government could surveil a human-rights activist<\/strong>.<\/p>\n<p>&#8220;General artificial intelligence could be the last innovation that human beings really need to do themselves,&#8221; said Tyrance Billingsley, executive director of the group and an event judge. 
&#8220;We\u2019re still in the early, early, early stages.&#8221;<\/p>\n<\/div>\n<p>      <span class=\"field field--name-uid field--type-entity-reference field--label-hidden\"><a title=\"View user profile.\" href=\"https:\/\/cms.zerohedge.com\/users\/tyler-durden\" class=\"username\">Tyler Durden<\/a><\/span><br \/>\n<span class=\"field field--name-created field--type-created field--label-hidden\">Sat, 08\/12\/2023 &#8211; 19:00<\/span><\/p>\n<p><a href=\"https:\/\/www.zerohedge.com\/technology\/study-reveals-which-ai-chatbot-most-woke-while-hackers-trick-llms-bad-math\" target=\"_blank\" class=\"\" rel=\"noopener\">https:\/\/www.zerohedge.com\/technology\/study-reveals-which-ai-chatbot-most-woke-while-hackers-trick-llms-bad-math<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Study Reveals Which AI Chatbot Most Woke, While Hackers Trick LLMs Into &#8216;Bad Math&#8217; A landmark study from researchers at the University of Washington, Carnegie&#8230;<\/p>\n","protected":false},"author":0,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[1],"tags":[],"class_list":["post-1399552","post","type-post","status-publish","format-standard","hentry","category-news","wpcat-1-id"],"_links":{"self":[{"href":"https:\/\/bugaluu.com\/news\/wp-json\/wp\/v2\/posts\/1399552","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/bugaluu.com\/news\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/bugaluu.com\/news\/wp-json\/wp\/v2\/types\/post"}],"replies":[{"embeddable":true,"href":"https:\/\/bugaluu.com\/news\/wp-json\/wp\/v2\/comments?post=1399552"}],"version-history":[{"count":0,"href":"https:\/\/bugaluu.com\/news\/wp-json\/wp\/v2\/posts\/1399552\/revisions"}],"wp:attachment":[{"href":"https:\/\/bugaluu.com\/news\/wp-json\/wp\/v2\/media?parent=1399552"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\
/bugaluu.com\/news\/wp-json\/wp\/v2\/categories?post=1399552"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/bugaluu.com\/news\/wp-json\/wp\/v2\/tags?post=1399552"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}