
Cambridge IELTS 18 Authentic Reading Test (Digital Edition): Test 2, Passage 2

彭静 2024-08-20 16:36:16

Cambridge IELTS 18 contains four complete sets of authentic IELTS papers. 羊驼雅思 has compiled the answers and explanations for the second set (Test 2 Reading). Below is the original text of the second reading passage (Passage 2), provided for candidates to use in their revision.

See also: the Cambridge IELTS 18 answers and explanations roundup, and the Cambridge 18 difficulty analysis.

For the full index of Cambridge IELTS 18 Listening and Reading answers and explanations, see the Cambridge IELTS 18 roundup. To support candidates' preparation, the 羊驼雅思 app has released video walkthroughs of the Cambridge IELTS papers and their answers; details are available under the Courses tab in the app.

READING PASSAGE 2

You should spend about 20 minutes on Questions 14-26, which are based on Reading Passage 2 below.

Living with artificial intelligence

Powerful artificial intelligence (AI) needs to be reliably aligned with human values, but does this mean AI will eventually have to police those values?

This has been the decade of AI, with one astonishing feat after another. A chess-playing AI that can defeat not only all human chess players, but also all previous human-programmed chess machines, after learning the game in just four hours? That's yesterday's news; what's next? True, these prodigious accomplishments are all in so-called narrow AI, where machines perform highly specialised tasks. But many experts believe this restriction is very temporary. By mid-century, we may have artificial general intelligence (AGI): machines that can achieve human-level performance on the full range of tasks that we ourselves can tackle.

If so, there's little reason to think it will stop there. Machines will be free of many of the physical constraints on human intelligence. Our brains run at slow biochemical processing speeds on the power of a light bulb, and their size is restricted by the dimensions of the human birth canal. It is remarkable what they accomplish, given these handicaps. But they may be as far from the physical limits of thought as our eyes are from the incredibly powerful Webb Space Telescope.

Once machines are better than us at designing even smarter machines, progress towards these limits could accelerate. What would this mean for us? Could we ensure a safe and worthwhile coexistence with such machines? On the plus side, AI is already useful and profitable for many things, and super AI might be expected to be super useful, and super profitable. But the more powerful AI becomes, the more important it will be to specify its goals with great care. Folklore is full of tales of people who ask for the wrong thing, with disastrous consequences: King Midas, for example, might have wished that everything he touched turned to gold, but didn't really intend this to apply to his breakfast.

So we need to create powerful AI machines that are 'human-friendly', that is, machines with goals reliably aligned with our own values. One thing that makes this task difficult is that we are far from reliably human-friendly ourselves. We do many terrible things to each other and to many other creatures with whom we share the planet. If superintelligent machines don't do a lot better than us, we'll be in deep trouble. We'll have powerful new intelligence amplifying the dark sides of our own fallible natures.

For safety's sake, then, we want the machines to be ethically as well as cognitively superhuman. We want them to aim for the moral high ground, not for the troughs in which many of us spend some of our time. Luckily they'll be smart enough for the job. If there are routes to the moral high ground, they'll be better than us at finding them, and steering us in the right direction.

However, there are two big problems with this utopian vision. One is how we get the machines started on the journey; the other is what it would mean to reach this destination. The 'getting started' problem is that we need to tell the machines what they're looking for with sufficient clarity that we can be confident they will find it - whatever 'it' actually turns out to be. This won't be easy, given that we are tribal creatures and conflicted about the ideals ourselves. We often ignore the suffering of strangers, and even contribute to it, at least indirectly. How, then, do we point machines in the direction of something better?

As for the 'destination' problem, we might, by putting ourselves in the hands of these moral guides and gatekeepers, be sacrificing our own autonomy, an important part of what makes us human. Machines who are better than us at sticking to the moral high ground may be expected to discourage some of the lapses we presently take for granted. We might lose our freedom to discriminate in favour of our own communities, for example.

Loss of freedom to behave badly isn't always a bad thing, of course: denying ourselves the freedom to put children to work in factories, or to smoke in restaurants, are signs of progress. But are we ready for ethical silicon police limiting our options? They might be so good at doing it that we won't notice them; but few of us are likely to welcome such a future.

These issues might seem far-fetched, but they are to some extent already here. AI already has some input into how resources are used in our National Health Service (NHS) here in the UK, for example. If it was given a greater role, it might do so much more efficiently than humans can manage, and act in the interests of taxpayers and those who use the health system. However, we'd be depriving some humans (e.g. senior doctors) of the control they presently enjoy. Since we'd want to ensure that people are treated equally and that policies are fair, the goals of AI would need to be specified correctly.

We have a new powerful technology to deal with: itself, literally, a new way of thinking. For our own safety, we need to point these new thinkers in the right direction, and get them to act well for us. It is not yet clear whether this is possible, but if it is, it will require a cooperative spirit, and a willingness to set aside self-interest.

Both general intelligence and moral reasoning are often thought to be uniquely human capacities. But safety seems to require that we think of them as a package: if we are to give general intelligence to machines, we'll need to give them moral authority, too. And where exactly would that leave human beings? All the more reason to think about the destination now, and to be careful about what we wish for.

See also: Cambridge IELTS 17-19 Listening and Reading answers and explanations roundup.

