Measles, which once killed 450 children each year and disabled even more, was nearly wiped out in the United States 14 years ago by the universal use of the MMR vaccine. But the disease is making a comeback, driven by a growing anti-vaccine movement and rapidly spreading misinformation. Already this year, 115 measles cases have been reported in the USA, compared with 189 for all of last year.

The numbers might sound small, but they are the leading edge of a dangerous trend. When vaccination rates are very high, as they still are in the nation as a whole, everyone is protected. This is called "herd immunity", which protects the people who get hurt easily, including those who can't be vaccinated for medical reasons, babies too young to get vaccinated and people on whom the vaccine doesn't work.

But herd immunity works only when nearly the whole herd joins in. When some refuse vaccination and seek a free ride, immunity breaks down and everyone is in even bigger danger.

That's exactly what is happening in small neighborhoods around the country, from Orange County, California, where 22 measles cases were reported this month, to Brooklyn, N.Y., where a 17-year-old caused an outbreak last year.

The resistance to vaccines has continued for decades, and it is driven by a real but very small risk. Those who refuse to take that risk selfishly make others suffer.

Making things worse are state laws that make it too easy to opt out of what are supposed to be required vaccines for all children entering kindergarten.
Seventeen states allow parents to get an exemption, sometimes just by signing a paper saying they personally object to a vaccine.

Now, several states are moving to tighten their laws by adding new regulations for opting out. But no state does enough to limit exemptions.

Parents ought to be able to opt out only for limited medical or religious reasons. But personal opinions? Not good enough. Everyone enjoys the life-saving benefits vaccines provide, but those benefits will exist only as long as everyone shares in the risks.
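The herd-immunity argument above can be made concrete with the standard epidemiological threshold formula, 1 − 1/R0, where R0 is the basic reproduction number (the average number of people one case infects in a fully susceptible population). A minimal sketch follows; the R0 range of 12–18 for measles is a commonly cited textbook figure and is not taken from this passage:

```python
def herd_immunity_threshold(r0: float) -> float:
    """Fraction of the population that must be immune to halt spread.

    If each case infects r0 others when nobody is immune, then with a
    fraction p immune it infects only r0 * (1 - p). Spread stops once
    r0 * (1 - p) < 1, i.e. once p exceeds 1 - 1/r0.
    """
    if r0 <= 1:
        return 0.0  # the disease dies out on its own
    return 1 - 1 / r0

# Measles is among the most contagious diseases known (assumed R0 of 12-18),
# which is why even a small drop in coverage breaks herd immunity.
for r0 in (12, 18):
    print(f"R0={r0}: about {herd_immunity_threshold(r0):.0%} must be immune")
```

The high threshold (roughly 92–94% for measles under these assumed R0 values) is why "nearly the whole herd" must join in: exemption rates of just a few percent in a neighborhood are enough to reopen the door to outbreaks.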
Hollywood's theory that machines with evil minds will drive armies of killer robots is just silly. The real problem relates to the possibility that artificial intelligence (AI) may become extremely good at achieving something other than what we really want. In 1960, the well-known mathematician Norbert Wiener, who founded the field of cybernetics, put it this way: "If we use, to achieve our purposes, a mechanical agency with whose operation we cannot effectively interfere, we had better be quite sure that the purpose put into the machine is the purpose which we really desire."

A machine with a specific purpose has another quality, one that we usually associate with living things: a wish to preserve its own existence. For the machine, this quality is not in-born, nor is it something introduced by humans; it is a logical consequence of the simple fact that the machine cannot achieve its original purpose if it is dead. So if we send out a robot with the single instruction of fetching coffee, it will have a strong desire to secure success by disabling its own off switch or even killing anyone who might interfere with its task. If we are not careful, then, we could face a kind of global chess match against very determined, superintelligent machines whose objectives conflict with our own, with the real world as the chess board.

The possibility of entering into and losing such a match should concentrate the minds of computer scientists. Some researchers argue that we can seal the machines inside a kind of firewall, using them to answer difficult questions but never allowing them to affect the real world.
Unfortunately, that plan seems unlikely to work: we have yet to invent a firewall that is secure against ordinary humans, let alone superintelligent machines.

Solving the safety problem well enough to move forward in AI seems to be possible but not easy. There are probably decades in which to plan for the arrival of superintelligent machines. But the problem should not be dismissed out of hand, as it has been by some AI researchers. Some argue that humans and machines can coexist as long as they work in teams; yet that is not possible unless machines share the goals of humans. Others say we can just "switch them off", as if superintelligent machines are too stupid to think of that possibility. Still others think that superintelligent AI will never happen. On September 11, 1933, the famous physicist Ernest Rutherford stated, with confidence, "Anyone who expects a source of power in the transformation of these atoms is talking moonshine." However, on September 12, 1933, physicist Leo Szilard invented the neutron-induced nuclear chain reaction.