Record Details

Superintelligence : paths, dangers, strategies

The human brain has some capabilities that the brains of other animals lack. It is to these distinctive capabilities that our species owes its dominant position. Other animals have stronger muscles or sharper claws, but we have cleverer brains. If machine brains one day come to surpass human brains in general intelligence, then this new superintelligence could become very powerful. As the fate of the gorillas now depends more on us humans than on the gorillas themselves, so the fate of our species then would come to depend on the actions of the machine superintelligence. But we have one advantage: we get to make the first move. Will it be possible to construct a seed AI or otherwise to engineer initial conditions so as to make an intelligence explosion survivable? How could one achieve a controlled detonation? To get closer to an answer to this question, we must make our way through a fascinating landscape of topics and considerations. Read the book and learn about oracles, genies, singletons; about boxing methods, tripwires, and mind crime; about humanity's cosmic endowment and differential technological development; indirect normativity, instrumental convergence, whole brain emulation and technology couplings; Malthusian economics and dystopian evolution; artificial intelligence, and biological cognitive enhancement, and collective intelligence. -- Source other than Library of Congress.

Book - 2016
006.301 Bos
1 copy / 0 on hold

Available Copies by Location

Community Centre: Available

Other Formats

  • ISBN: 9780198739838
  • ISBN: 0198739834
  • Physical Description: xvi, 415 pages : illustrations ; 20 cm
  • Publisher: Oxford, United Kingdom ; New York, NY : Oxford University Press, 2016.
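The record lists both a 13-digit and a 10-digit ISBN for this edition. As a quick sanity check, the standard ISBN check-digit rules can be applied to them; a minimal sketch in Python (not part of the catalogue record itself):

```python
def isbn13_valid(isbn: str) -> bool:
    """ISBN-13: alternating weights 1, 3; weighted sum must be divisible by 10."""
    digits = [int(c) for c in isbn]
    return sum(d * (1 if i % 2 == 0 else 3) for i, d in enumerate(digits)) % 10 == 0

def isbn10_valid(isbn: str) -> bool:
    """ISBN-10: weights 10 down to 1 ('X' counts as 10); sum divisible by 11."""
    digits = [10 if c == "X" else int(c) for c in isbn]
    return sum(d * w for d, w in zip(digits, range(10, 0, -1))) % 11 == 0

# The two ISBNs from this record:
print(isbn13_valid("9780198739838"))  # True
print(isbn10_valid("0198739834"))     # True
```

Both identifiers in the record pass their respective checks, as expected for a transcribed 020 field.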

Content descriptions

Bibliography, etc. Note:
Includes bibliographical references (pages 383-406) and index.
Formatted Contents Note:
1. Past developments and present capabilities -- 2. Paths to superintelligence -- 3. Forms of superintelligence -- 4. The kinetics of an intelligence explosion -- 5. Decisive strategic advantage -- 6. Cognitive superpowers -- 7. The superintelligent will -- 8. Is the default outcome doom? -- 9. The control problem -- 10. Oracles, genies, sovereigns, tools -- 11. Multipolar scenarios -- 12. Acquiring values -- 13. Choosing the criteria for choosing -- 14. The strategic picture -- 15. Crunch time.

Additional Information

LDR 03395cam a22003617i 4500
001 260825
003 NFPL
005 20180928140926.0
008 151104r20162014enka e b 001 0 eng d
010 . ‡a 2015956648
020 . ‡a9780198739838 ‡q(paperback)
020 . ‡a0198739834 ‡q(paperback)
035 . ‡a(OCoLC)ocn945184787
040 . ‡aCDX ‡beng ‡erda ‡cCDX ‡dOCLCO ‡dBDX ‡dYDXCP ‡dEQO ‡dOCLCO ‡dOCLCF ‡dMIQ ‡dOCLCO ‡dBKL ‡dDLC
042 . ‡alccopycat
08204. ‡a006.301 ‡223
1001 . ‡aBostrom, Nick, ‡d1973- ‡eauthor. ‡0(NFPL)24545
24510. ‡aSuperintelligence : ‡bpaths, dangers, strategies / ‡cNick Bostrom, Director, Future of Humanity Institute, Director, Strategic Artificial Intelligence Research Centre, Professor, Faculty of Philosophy & Oxford Martin School, University of Oxford.
264 1. ‡aOxford, United Kingdom ; ‡aNew York, NY : ‡bOxford University Press, ‡c2016.
300 . ‡axvi, 415 pages : ‡billustrations ; ‡c20 cm
336 . ‡atext ‡btxt ‡2rdacontent
337 . ‡aunmediated ‡bn ‡2rdamedia
338 . ‡avolume ‡bnc ‡2rdacarrier
504 . ‡aIncludes bibliographical references (pages 383-406) and index.
50500. ‡g1. ‡tPast developments and present capabilities -- ‡g2. ‡tPaths to superintelligence -- ‡g3. ‡tForms of superintelligence -- ‡g4. ‡tThe kinetics of an intelligence explosion -- ‡g5. ‡tDecisive strategic advantage -- ‡g6. ‡tCognitive superpowers -- ‡g7. ‡tThe superintelligent will -- ‡g8. ‡tIs the default outcome doom? -- ‡g9. ‡tThe control problem -- ‡g10. ‡tOracles, genies, sovereigns, tools -- ‡g11. ‡tMultipolar scenarios -- ‡g12. ‡tAcquiring values -- ‡g13. ‡tChoosing the criteria for choosing -- ‡g14. ‡tThe strategic picture -- ‡g15. ‡tCrunch time.
520 . ‡aThe human brain has some capabilities that the brains of other animals lack. It is to these distinctive capabilities that our species owes its dominant position. Other animals have stronger muscles or sharper claws, but we have cleverer brains. If machine brains one day come to surpass human brains in general intelligence, then this new superintelligence could become very powerful. As the fate of the gorillas now depends more on us humans than on the gorillas themselves, so the fate of our species then would come to depend on the actions of the machine superintelligence. But we have one advantage: we get to make the first move. Will it be possible to construct a seed AI or otherwise to engineer initial conditions so as to make an intelligence explosion survivable? How could one achieve a controlled detonation? To get closer to an answer to this question, we must make our way through a fascinating landscape of topics and considerations. Read the book and learn about oracles, genies, singletons; about boxing methods, tripwires, and mind crime; about humanity's cosmic endowment and differential technological development; indirect normativity, instrumental convergence, whole brain emulation and technology couplings; Malthusian economics and dystopian evolution; artificial intelligence, and biological cognitive enhancement, and collective intelligence. -- ‡cSource other than Library of Congress.
650 0. ‡aArtificial intelligence ‡xPhilosophy. ‡0(NFPL)119096
650 0. ‡aArtificial intelligence ‡xSocial aspects. ‡0(NFPL)119097
650 0. ‡aArtificial intelligence ‡xMoral and ethical aspects.
650 0. ‡aComputers and civilization. ‡0(NFPL)94031
650 0. ‡aCognitive science.
904 . ‡aMARCIVE 2023
901 . ‡a260825 ‡bAUTOGEN ‡c260825 ‡tbiblio
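The raw record above uses the MARC display convention of a tag (plus indicators) followed by subfields, each introduced by the double-dagger delimiter ‡ and a one-character code. A minimal sketch of splitting one such displayed line into its parts, assuming exactly the layout shown above (this is a parser for this display format, not for MARC binary transmission records):

```python
def parse_marc_line(line: str) -> tuple[str, str, list[tuple[str, str]]]:
    """Split a displayed MARC field line into (tag, indicators, subfields).

    Assumes the format shown in the record: a 3-digit tag, optional
    indicator characters, a '.', then subfields each introduced by
    '\u2021' (double dagger) plus a one-character subfield code.
    """
    head, _, rest = line.partition("\u2021")
    tag = head[:3]
    indicators = head[3:].strip(" .")
    subfields = []
    for chunk in ("\u2021" + rest).split("\u2021")[1:]:
        code, value = chunk[0], chunk[1:].strip()
        subfields.append((code, value))
    return tag, indicators, subfields

# One of the 020 (ISBN) fields from the record:
tag, ind, subs = parse_marc_line("020    . \u2021a9780198739838 \u2021q(paperback)")
```

Here `tag` is `"020"`, the indicators are blank, and `subs` holds the ISBN in subfield `a` and the qualifier in subfield `q`.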