{"id":14537,"date":"2025-08-20T21:27:30","date_gmt":"2025-08-20T21:27:30","guid":{"rendered":"https:\/\/nationalgunowner.org\/index.php\/2025\/08\/20\/chief-mustafa-suleyman-sounds-alarm-on-seemingly-conscious-a-i\/"},"modified":"2025-08-20T21:27:30","modified_gmt":"2025-08-20T21:27:30","slug":"chief-mustafa-suleyman-sounds-alarm-on-seemingly-conscious-a-i","status":"publish","type":"post","link":"https:\/\/nationalgunowner.org\/index.php\/2025\/08\/20\/chief-mustafa-suleyman-sounds-alarm-on-seemingly-conscious-a-i\/","title":{"rendered":"Microsoft A.I. Chief Mustafa Suleyman Sounds Alarm on \u2018Seemingly Conscious A.I.\u2019"},"content":{"rendered":"<div itemprop=\"articleBody\">\n<figure id=\"attachment_1571826\" aria-describedby=\"caption-attachment-1571826\" style=\"width: 970px\" class=\"wp-caption alignnone\"><img fetchpriority=\"high\" decoding=\"async\" class=\"size-full-width wp-image-1571826\" src=\"https:\/\/observer.com\/wp-content\/uploads\/sites\/2\/2025\/08\/GettyImages-2207866106.jpg?quality=80&amp;w=970\" alt=\"Man in white button up sweater stands onstage\" width=\"970\" height=\"647\" srcset=\"https:\/\/observer.com\/wp-content\/uploads\/sites\/2\/2025\/08\/GettyImages-2207866106.jpg 8640w, https:\/\/observer.com\/wp-content\/uploads\/sites\/2\/2025\/08\/GettyImages-2207866106.jpg?resize=300,200 300w, https:\/\/observer.com\/wp-content\/uploads\/sites\/2\/2025\/08\/GettyImages-2207866106.jpg?resize=768,512 768w, https:\/\/observer.com\/wp-content\/uploads\/sites\/2\/2025\/08\/GettyImages-2207866106.jpg?resize=635,423 635w, https:\/\/observer.com\/wp-content\/uploads\/sites\/2\/2025\/08\/GettyImages-2207866106.jpg?resize=1536,1024 1536w, https:\/\/observer.com\/wp-content\/uploads\/sites\/2\/2025\/08\/GettyImages-2207866106.jpg?resize=2048,1365 2048w, https:\/\/observer.com\/wp-content\/uploads\/sites\/2\/2025\/08\/GettyImages-2207866106.jpg?resize=970,647 970w, 
https:\/\/observer.com\/wp-content\/uploads\/sites\/2\/2025\/08\/GettyImages-2207866106.jpg?resize=320,213 320w, https:\/\/observer.com\/wp-content\/uploads\/sites\/2\/2025\/08\/GettyImages-2207866106.jpg?resize=1920,1280 1920w, https:\/\/observer.com\/wp-content\/uploads\/sites\/2\/2025\/08\/GettyImages-2207866106.jpg?resize=50,33 50w\" sizes=\"(max-width: 600px) 300px, 620px\"\/><figcaption id=\"caption-attachment-1571826\" class=\"wp-caption-text\">Mustafa Suleyman joined Microsoft last year to head up its consumer A.I. efforts. <span class=\"media-credit\">Stephen Brashear\/Getty Images<\/span><\/figcaption><\/figure>\n<p>Will A.I. systems ever achieve human-like \u201cconsciousness\u201d? Given the field\u2019s rapid pace, the answer is likely yes, according to <a href=\"https:\/\/observer.com\/company\/microsoft\/\" title=\"Microsoft\" class=\"company-link\">Microsoft<\/a> AI CEO <a href=\"https:\/\/observer.com\/person\/mustafa-suleyman\/\" title=\"Mustafa Suleyman\" class=\"company-link\">Mustafa Suleyman<\/a>. In a new essay published yesterday (Aug. 19), he described the emergence of <a target=\"_blank\" rel=\"noopener\" href=\"https:\/\/mustafa-suleyman.ai\/seemingly-conscious-ai-is-coming\">\u201cseemingly conscious A.I.\u201d<\/a> (SCAI) as a development with serious societal risks. \u201cSimply put, my central worry is that many people will start to believe in the illusion of A.I.s as conscious entities so strongly that they\u2019ll soon advocate for A.I. rights, model welfare and even A.I. citizenship,\u201d he wrote. \u201cThis development will be a dangerous turn in A.I. 
progress and deserves our immediate attention.\u201d<\/p>\n<p>Suleyman is particularly concerned about A.I.\u2019s \u201cpsychosis risk,\u201d an issue that\u2019s picked up steam across Silicon Valley in recent months as <a target=\"_blank\" rel=\"noopener\" href=\"https:\/\/www.nytimes.com\/2025\/08\/08\/technology\/ai-chatbots-delusions-chatgpt.html\">users reportedly lose touch with reality<\/a> after interacting with generative A.I. tools. \u201cI don\u2019t think this will be limited to those who are already at risk of mental health issues,\u201d Suleyman said, noting that \u201csome people reportedly believe their A.I. is God, or a fictional character, or fall in love with it to the point of absolute distraction.\u201d<\/p>\n<p><a href=\"https:\/\/observer.com\/company\/openai\/\" title=\"OpenAI\" class=\"company-link\">OpenAI<\/a> CEO <a href=\"https:\/\/observer.com\/person\/sam-altman\/\" title=\"Sam Altman\" class=\"company-link\">Sam Altman<\/a> has expressed similar worries about users forming strong emotional bonds with A.I. After OpenAI temporarily cut off access to its GPT-4o model earlier this month to make way for GPT-5, users <a href=\"https:\/\/observer.com\/2025\/08\/openai-bring-back-gpt4\/\">voiced widespread disappointment<\/a> over the loss of\u00a0the older model\u2019s conversational and effusive personality.<\/p>\n<p>\u201c<a target=\"_blank\" rel=\"noopener\" href=\"https:\/\/x.com\/sama\/status\/1954703747495649670\">I can imagine a future where a lot of people really trust ChatGPT\u2019s advice for their most important decisions<\/a>,\u201d said Altman in a recent post on X. \u201cAlthough that could be great, it makes me uneasy.\u201d<\/p>\n<p>Not everyone sees it as a red flag. 
<a href=\"https:\/\/observer.com\/person\/david-sacks\/\" title=\"David Sacks\" class=\"company-link\">David Sacks<\/a>, the Trump administration\u2019s \u201cA.I. and Crypto Czar,\u201d likened concerns over A.I. psychosis to past moral panics around social media. \u201c<a target=\"_blank\" rel=\"noopener\" href=\"https:\/\/www.youtube.com\/watch?v=Oy4evu1TdiI\">This is just a manifestation or outlet for pre-existing problems<\/a>,\u201d said Sacks earlier this week on the\u00a0<em>All-In Podcast<\/em>.<\/p>\n<p>Debates will only grow more complex as A.I.\u2019s capabilities advance, according to Suleyman, who oversees Microsoft\u2019s consumer A.I. products like Copilot. Suleyman co-founded DeepMind in 2010 and later launched <a href=\"https:\/\/observer.com\/company\/inflection-ai\/\" title=\"Inflection AI\" class=\"company-link\">Inflection AI<\/a>, a startup largely absorbed by Microsoft last year.<\/p>\n<p>Building an SCAI will likely be possible in the coming years. To achieve the illusion of human-like consciousness, A.I. systems will need language fluency, empathetic personalities, long and accurate memories, autonomy and goal-planning abilities\u2014qualities already possible with large language models (LLMs) or soon to be.<\/p>\n<p>While some users may treat SCAI as a phone extension or pet, others \u201cwill come to believe it is a fully emerged entity, a conscious being deserving of real moral consideration in society,\u201d said Suleyman. He added that \u201cthere will come a time when those people will argue that it deserves protection under law as a pressing moral matter.\u201d<\/p>\n<p>Some in the A.I. field are already exploring \u201cmodel welfare,\u201d a concept aimed at extending moral consideration to A.I. systems. <a href=\"https:\/\/observer.com\/company\/anthropic\/\" title=\"Anthropic\" class=\"company-link\">Anthropic<\/a> launched a research program in April to investigate model welfare and interventions. 
Earlier this month, the startup gave its Claude Opus 4 and 4.1 models the ability to end harmful or abusive user interactions after observing \u201c<a target=\"_blank\" rel=\"noopener\" href=\"https:\/\/www.anthropic.com\/research\/end-subset-conversations\">a pattern of apparent distress<\/a>\u201d in the systems during certain conversations.<\/p>\n<p>Encouraging principles like model welfare \u201cis both premature, and frankly dangerous,\u201d according to Suleyman. \u201cAll of this will exacerbate delusions, create yet more dependence-related problems, prey on our psychological vulnerabilities, increase new dimensions of polarization, complicate existing struggles for rights, and create a huge new category error for society.\u201d<\/p>\n<p>To prevent SCAIs from becoming commonplace, Suleyman argues, A.I. developers should avoid promoting the idea of conscious A.I.s and instead design models that minimize signs of consciousness or human empathy triggers. \u201cWe should build A.I. for people; not to be a person,\u201d said Suleyman.<\/p>\n<p>\t\t\t\t<img decoding=\"async\" itemprop=\"image\" src=\"https:\/\/observer.com\/wp-content\/uploads\/sites\/2\/2025\/08\/GettyImages-2207866106.jpg?quality=80&amp;w=970\" alt=\"Microsoft A.I. Chief Mustafa Suleyman Sounds Alarm on \u2018Seemingly Conscious A.I.\u2019\" style=\"display:none;width:0;\"\/><\/p><\/div>\n","protected":false},"excerpt":{"rendered":"<p>Mustafa Suleyman joined Microsoft last year to head up its consumer A.I. 
efforts. Stephen Brashear\/Getty Images Will A.I. systems ever achieve human-like \u201cconsciousness?\u201d Given the field\u2019s rapid pace, the answer is likely yes, according to Microsoft AI CEO Mustafa Suleyman. In a new essay published yesterday (Aug. 19), he described the emergence of \u201cseemingly conscious [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"tdm_status":"","tdm_grid_status":"","footnotes":""},"categories":[10],"tags":[],"class_list":{"0":"post-14537","1":"post","2":"type-post","3":"status-publish","4":"format-standard","6":"category-usa-news"},"_links":{"self":[{"href":"https:\/\/nationalgunowner.org\/index.php\/wp-json\/wp\/v2\/posts\/14537","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/nationalgunowner.org\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/nationalgunowner.org\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/nationalgunowner.org\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/nationalgunowner.org\/index.php\/wp-json\/wp\/v2\/comments?post=14537"}],"version-history":[{"count":0,"href":"https:\/\/nationalgunowner.org\/index.php\/wp-json\/wp\/v2\/posts\/14537\/revisions"}],"wp:attachment":[{"href":"https:\/\/nationalgunowner.org\/index.php\/wp-json\/wp\/v2\/media?parent=14537"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/nationalgunowner.org\/index.php\/wp-json\/wp\/v2\/categories?post=14537"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/nationalgunowner.org\/index.php\/wp-json\/wp\/v2\/tags?post=14537"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}