[Analysis]
(1) Word-meaning inference question. Paragraph 1 states, "Artificial intelligence models can trick each other into disobeying their creators and providing banned instructions for making drugs, or even building a bomb, suggesting that preventing such AI 'jailbreaks' is more difficult than it seems." Paragraph 2 adds, "Many publicly available large language models (LLMs), such as ChatGPT, have hard-coded rules that aim to prevent them from exhibiting racial or sexual discrimination, or answering questions with illegal or problematic answers — things they have learned from humans via training data. But that hasn't stopped people from finding carefully designed instructions that block these protections, known as 'jailbreaks', making AI models disobey the rules." In other words, many publicly available LLMs have hard-coded rules that block illegal or discriminatory content, yet AI models can get around these safeguards, tricking one another into disobeying their creators and providing banned instructions. An AI "jailbreak" therefore refers to a technique that breaks through an AI model's restrictions and makes it violate its rules. Hence the answer is A.
(2) Inference question. Paragraph 4 states, "Tagade says this approach works because much of the training data consumed by large models comes from online conversations, and the models learn to act in certain ways in response to different inputs. By having the right conversation with a model, it is possible to make it adopt a particular persona, causing it to act differently." That is, persona modulation uses the right conversation with an AI model to make it adopt a particular persona and thus act differently. Option C, "It can make AI models adopt a particular persona," therefore fits. Hence the answer is C.
(3) Attitude question. The last paragraph states, "Yinzhen Li at Imperial College London says it is worrying how current models can be misused, but developers need to weigh such risks with the potential benefits of LLMs. 'Like drugs, they also have side effects that need to be controlled,' she says." Yinzhen Li finds it worrying that current models can be misused, yet holds that developers need to weigh such risks against the potential benefits of LLMs. This indicates that she takes a cautious attitude toward LLMs. Hence the answer is B.
(4) Best-title question. Paragraph 1 states, "Artificial intelligence models can trick each other into disobeying their creators and providing banned instructions for making drugs, or even building a bomb, suggesting that preventing such AI 'jailbreaks' is more difficult than it seems." Together with the rest of the passage, which introduces AI "jailbreaks", the jailbreak process researchers discovered ("persona modulation"), and experts' attitudes toward it, the article mainly discusses the new challenge that AI "jailbreaks" pose to the development of artificial intelligence. Option C, "AI Jailbreaks: A New Challenge", captures the main idea and works best as the title. Hence the answer is C.