Review & Analysis of Kurzweil's How To Create A Mind (The Gist Of It Club Book 1)


Show me a computer capable of thinking, writing symphonies, loving, etc. Because you will need to breed a whole new race of pigs in a decade or so. The rest should read it to find out what all the fuss is about. Because even if you know nothing about AI and neuroscience, this may be a good time to start learning. Ray Kurzweil is a prize-winning scientist, writer, and futurist. He has invented numerous things, ranging from the first omni-font OCR (optical character recognition) system to the first print-to-speech reading machine for the blind, and from the first flatbed scanner to the first commercial text-to-speech synthesizer.

Ladies and gentlemen, please join us in unraveling the secret of human thought with the one and only Ray Kurzweil, aka the guy who gave humanity flatbed scanners, optical character recognition, print-to-speech reading machines, and text-to-speech synthesizers!


In a nutshell: someone who definitely knows more than most about how our brain may function, based on his work with artificial brains. But first, try another thought experiment with us. Kurzweil thinks that these thought experiments reveal something much more than the fact that, essentially, your memory sucks. And that this should give us a hint about how our brain actually does its job. You thought that only computers follow specific algorithms? So much so that, in fact, human consciousness pioneer Benjamin Libet has proposed that even your free will may be an illusion, since, according to him, these experiments show that your brain is also merely (OK, in strictly relative terms) doing hierarchical statistical analysis.

And by brain, we actually mean your neocortex, which, according to Kurzweil, is where the magic actually happens.

The unfamiliar and often elaborate and arcane presentation of the results of statistical study in stylistics should not obscure what it has in common with traditional practices. It is common in any analysis to move from relatively uncontroversial to relatively controversial observations. This definition of some readily agreed-on aspects of a text is a prelude to interpretation. These results may be gathered by hand, as in the observation that plays written by women tend to have more women characters, and that women characters in them are more likely to start and end a scene.

In principle, work in computational stylistics is no different. In recent years the most-used markers for computational stylistics have been function words. Other very common words which do not appear to be unduly sensitive to subject matter are also attractive. Then there are combinations of word-types: in Lancashire's terminology, the fixed phrase, the collocation (a pair of words within a certain number of words of each other), and the word cluster (a combination of word phrase and collocation) (Lancashire). Lancashire's understanding of language production as mostly instinctive, and based on an associative memory, leads him to highlight phrases as the key markers of idiolect: phrases of up to seven words, since that is the limit of short-term or working memory.

The existence of large commercial online full-text databases now makes it possible to use this kind of authorship marker against the background of a large corpus, which can show whether any parallels between the phrases of a doubtful text and those of a target author are truly unusual. There are some sample studies in Jackson. Among the other most important style markers have been choices among pairs of words which can be readily substituted for each other in a sentence, such as while and whilst, has and hath, and on and upon, and the words used to begin sentences.
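To make the marker concrete, here is a minimal sketch, in Python, of how preferences within such substitutable pairs might be counted; the pair list and the sample sentence are invented for illustration and are not drawn from any of the studies cited.

```python
# A toy sketch of the "substitutable pairs" marker: how strongly a text
# prefers one member of a pair (while/whilst, has/hath, on/upon).
import re

PAIRS = [("while", "whilst"), ("has", "hath"), ("on", "upon")]

def pair_preferences(text):
    """Return, for each pair, the share of occurrences going to the first member."""
    words = re.findall(r"[a-z']+", text.lower())
    prefs = {}
    for a, b in PAIRS:
        na, nb = words.count(a), words.count(b)
        if na + nb:
            prefs[f"{a}/{b}"] = na / (na + nb)
    return prefs

sample = "While he has gone on ahead, whilst she hath stayed upon the hill."
print(pair_preferences(sample))
```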


McMenamin details a vast range of these markers; the range itself, and the dangers of arbitrary choices among them, have contributed to doubts about the overall validity of such studies (Furbank and Owens). The best method of guarding against these dangers is careful testing of the method alongside its application to the immediate problem.

If the markers and procedures chosen provide an efficient separation of samples into authorial or other groups in a closely comparable set — ideally, drawn at random from the main set and then reserved for testing — then that provides a guide to the reliability of the method when it comes to the contested texts. Discriminant analysis, a method for data reduction with some similarities to PCA, provides a good illustration.

Discriminant analysis provides a weighted vector which will maximize the separation between two groups of samples named in advance. The vector can then be held constant and a score can be worked out for a mystery segment. The procedure even supplies a probability that any given segment belongs to one group or the other. This seems ideal for authorship problems: simply provide some samples of author A, some of author B, calculate the Discriminant function, and then test any doubtful segment.

Yet discriminant results are notoriously optimistic: the separation of the segments into the named groups is maximized, but we have no way of knowing whether they are representative of the population of the works of Author A or B — which, in the fullest sense, is the range of works it was or is possible for Author A or B to write. There is the danger of "overtraining": the method will work superbly for these particular samples, but what it is providing is exactly a separation of those samples, which may be a very different matter from a true authorial separation. Good practice, therefore, is to reserve some samples (as many as 10 percent) as test samples.

They do not contribute to the "training" exercise, the formation of the function, and therefore are in the same position as the doubtful sample or samples. If these are correctly assigned to their group, or most of them are, then one can have some confidence that the result on the mystery sample is reliable. The ease of use of statistical packages makes it possible to do a large amount of testing even with scarce data by "bootstrapping" — a single item can be withdrawn from the training set and tested, then returned to the set while a second is extracted and tested, and so on.
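A minimal sketch of this testing regime, assuming synthetic data in place of real function-word counts, might look as follows; scikit-learn's LinearDiscriminantAnalysis stands in for the discriminant function, and leave-one-out testing plays the role of the withdraw-and-test procedure described above.

```python
# Sketch: validate a discriminant classifier on function-word frequencies
# by leave-one-out testing before trusting it on a disputed text.
# The frequency data here are synthetic, invented for illustration.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import LeaveOneOut

rng = np.random.default_rng(0)

# Rows: text samples; columns: relative frequencies of ten function words.
# Labels: 0 = Author A, 1 = Author B.
author_a = rng.normal(loc=0.030, scale=0.005, size=(12, 10))
author_b = rng.normal(loc=0.034, scale=0.005, size=(12, 10))
X = np.vstack([author_a, author_b])
y = np.array([0] * 12 + [1] * 12)

# Withdraw one sample, train on the rest, test it, return it, repeat.
# Each sample is thus placed in the same position as a "mystery" text.
hits = 0
for train_idx, test_idx in LeaveOneOut().split(X):
    model = LinearDiscriminantAnalysis().fit(X[train_idx], y[train_idx])
    hits += int(model.predict(X[test_idx])[0] == y[test_idx][0])
print(f"leave-one-out accuracy: {hits}/{len(X)}")

# Only if this accuracy is high should the model's verdict (and its
# posterior probability, via predict_proba) on a disputed sample be trusted.
```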

The claim for computational stylistics is that it makes available a class of evidence not otherwise accessible. This evidence is comparable to the evidence interpreters always use, if not to form their views, then at least to persuade others that they are true. The computational varieties of stylistic analysis and authorship studies require considerable immersion in traditional humanities disciplines (scholarly and literary-historical), in statistical method, and in computing.

All three present enough challenges and large enough bodies of knowledge to occupy the working lifetimes of individual researchers by themselves. Most commonly, single practitioners are experts in one or possibly two areas, very rarely in three, and have to make do in various more or less satisfactory ways in the third. Often the requisite skills have been assembled across two practitioners, as in the case of Burrows and Love. Authorship attribution is as old as writing itself, and its history displays a fascinating variety of problems and solutions.

Groupings of texts (Homer, the Bible, Shakespeare) may have been created at times when their coherence was not especially significant, but later generations have placed enormous importance on the difference between the canonical and the apocryphal in each case. The modern interest in authorship attribution derives from the Renaissance, when the availability of texts made comparative study possible, and a new critical spirit went with the linguistic and textual disciplines of Humanism.

The demonstration by Lorenzo Valla in the fifteenth century that the Donation of Constantine, which gave the western part of the Roman Empire to Pope Sylvester, was a forgery is perhaps the most famous example (Love: 18–). In the modern era most texts come securely attributed on external evidence. The title page of the first edition announces the author, or some knowledgeable contemporary assigns the work to the author.

Some exuberant authors include their name in the text itself, as Ben Jonson does in his ode to Lucius Cary and Henry Morison. These attributions may be a little deceptive in their straightforwardness — there are collaborators, editors, and writers of source materials to complicate matters — but for reasons of economy of effort such approximations (Charles Dickens wrote Great Expectations) are allowed to stand. Then for texts without this explicit and decisive external evidence there are numerous considerations of topic, approach, attitudes, imagery, turns of phrase, and so on, which have always served scholars as foundations for attributions.

The balance between the credibility of this internal evidence and the external kind has swung back and forth. The 1960s were a low point for internal evidence, as a reaction to the undisciplined accumulation of parallel passages and the wholesale "disintegration" of canons like Shakespeare's. This skepticism about internal evidence can be seen in Schoenbaum's book and in the Erdman and Fogel collection.

Humanities computing is involved with a second generation of attribution based on internal evidence, depending on measuring stylistic features in the doubtful and other texts, and comparing the results. In one celebrated case, the attribution of a funeral elegy signed "W. S." to Shakespeare (Foster), the only evidence in favor of the attribution was internal, and stylistic in the specialized sense, being quantitative on the one hand and concerned with largely unnoticed idiosyncrasies of language on the other.

Both sides of the debate agreed that the poem's style, in the more usual sense of the imagery, diction, and language use noticed by readers, was unlike canonical Shakespeare. The proponents of the attribution argued that since the quantitative and stylistic evidence was so strong, the generally held view of the "Shakespearean" would just have to change. Recently, another candidate, John Ford, has been proposed, for whom there were none of the cognitive dissonances just mentioned, and whose work appears to satisfy both readers and stylisticians as a good match for the Elegie.

Underpinning any interest in assigning authorship is a model of the author. Since the 1960s the traditional scholarly activity of determining authorship has been conducted with a certain unease, resulting from the work of the French post-structuralists Roland Barthes, Michel Foucault, and Jacques Derrida, who, in undermining what they saw as the bourgeois individual subject, displaced the author as the primary source of meaning for texts.

In the literary sphere this individual subject had reached an apogee in the Romantic idea of the heroic individual author. Since structuralism, there has been more interest in the role of discourse, culture, and language itself in creating texts. There are, as Furbank and Owens elaborate, special cautions which should attach to the activity of adding items to a canon. The more items included, the wider the canon, and the easier it is to add further ones (4). There is no such thing as adding a work temporarily to a canon. They urge resistance to the pressure to assign authors, since doing this has so many consequences for the canon to which the work is added: after all, "works do not need to be assigned to authors and it does not matter that anonymous works should remain anonymous". It is not enough that a work is a "plausible" addition to a canon, since once added, works are very hard to subtract. There is also a more traditional objection to the energy that has gone into attribution.

In Erasmus's words in his edition of St Jerome, "What is so important about whose name is on a book, provided it is a good book?" Erasmus's answer is that this may indeed not matter in the case of a mere playwright like Plautus, whose works he has discussed earlier (71), but is vital in the case of "sacred writers and pillars of the church" like Jerome. If "nonsense" by others is presented as the work of such writers, and the imposture is not detected, then readers are forced to remain silent about any doubts or to accept falsehood. Erasmus's attribution methods are discussed in Love (19–). A modern critic might argue that authorship is important even in the case of a dramatist like Plautus.

Attaching a name to a work "changes its meaning by changing its context … certain kinds of meaning are conferred by its membership and position in the book or oeuvre". Kermode suggests that "a different and special form of attention" is paid to the work of a famous writer (quoted in Furbank and Owens). Even within the world of scholarship, the discovery that an essay was not after all by an important thinker like Foucault would make a considerable difference (Love: 96–7).

The signs are that models of authorship are evolving from the post-structuralist "author function", an intersection of discourses, toward a concept more influenced by the workings of cognitive faculties. If the long-term memory store of human beings works not systematically but by "casting out a line for any things directly or indirectly associated with the object of our search", then "the organisation of memories … reflects the person's own past experience and thought rather than a shared resource of cultural knowledge", and this would imply a "unique idiolect" for each individual's speech or writing (Lancashire). Then there has been a renewed interest in the linguistics of the individual speaker, whose differences from other speakers, according to Johnstone, should be accorded "foundational status". She shows that, even in highly constraining situations like academic discourse or conducting or answering a telephone survey, speakers tend to create an individual style, and to maintain this style across different discourse types.

Sociolinguistics explains difference through social categories (most often class, gender, and race) or rhetorical ones (purpose and audience), but Johnstone argues that these should be seen as resources from which the individual constructs difference rather than the determinants of it (ix–x). She is prepared to envisage a return to the Romantic view of the importance of the individual in language, overturning the highly influential arguments of Saussure that language study should concern itself with langue, the system of a language, rather than parole, the individual instance of language production (20), and challenging the prestige of abstract scientific laws which have meant that, in the words of Edward Sapir, "[t]he laws of syntax acquire a higher reality than the immediate reality of the stammerer who is trying 'to get himself across'" (quoted in Johnstone). The practicalities of attribution by stylistic means hinge on the question of the variation in the style of an author.

In Sonnet 76 Shakespeare's speaker laments that he cannot vary his style in line with the poetic fashion, with the result that everything he writes is hopelessly easy to identify as his own, because it is "So far from variation or quick change". In the eighteenth century, Alexander Pope thought that attributing authorship by style was foolish, on the grounds, it seems, that it was too easy for an author to "borrow" a style (quoted in Craig). Variation, in other words, was unlimited.

Samuel Johnson later in the same century took the opposite point of view. His friend and biographer James Boswell asked him if everyone had their own style, just as everyone has a unique physiognomy and a unique handwriting. Johnson answered emphatically in the affirmative: "Why, Sir, I think every man whatever has a peculiar style, which may be discovered by nice examination and comparison with others: but a man must write a great deal to make his style obviously discernible" quoted in Love : 7.

The evidence from the vast activity in empirical work on authorship supports a qualified version of Johnson's view. An attribution to satisfy most standards of proof is possible on internal grounds, provided the doubtful sample is of sufficient length and sufficient samples for comparison in similar text types by candidate writers are available. It is reasonable to say that the extreme skeptics about "stylometry" or "non-traditional authorship attribution studies", those who suggested that its claims to useful information about authorship were on a par with those of phrenology about personality (Love), have been proved wrong.

Statistics depends on structured variation — on finding patterns in the changes of items along measurable scales. It is easy to see that language samples must vary in all sorts of ways as different messages are composed and in different modes and styles. The claims of statistical authorship attribution rest on the idea that this variation is constrained by the cognitive faculties of the writer. The writer will compose various works in the same or different genres and over a more or less extended career. Language features must vary in frequency within his or her output.

Chronology and genre are readily detectable as sources of change; readers might expect to tell the difference between early and late Henry James, or between the writing in a comic novel and a serious essay by the same writer. Play dialogue presents an expected sharp variation in style within the same work, which may contain old and young speakers, men and women, the rich and the poor, the witty and the dull, and so on. The approach to authorial idiolect through cognitive science and neurology offers its own reinforcement of the notion that different genres are treated differently, and that word-patterns can be acquired and also lost over a lifetime (Lancashire). There is, however, evidence which suggests that authorial consistency is quite a strong factor in most eras, so that one can expect to find other sources of systematic variation nested within it.

Early James and late James are different, but not so different as to override the difference between James and Thomas Hardy. The characters created by a single dramatist scatter on many variables, but the scatter may still be constrained enough that their multidimensional "territory" does not overlap with that of a second dramatist of the same period writing in the same genres. There is contradictory evidence on this matter from empirical studies. Burrows shows that Henry Fielding's style in his parody of Samuel Richardson remained close enough to his usual pattern to group his parody, Shamela, with his other writing, though it did move some way toward the style of Richardson himself.

On the other side of the ledger is the case of Romain Gary, who in the 1970s wrote two novels under a pseudonym in an effort to escape the reception which he felt his established reputation influenced too heavily. These novels were successful, and indeed after the publication of the second of them Gary won a second Prix Goncourt under his nom de plume, Émile Ajar.

Tirvengadum reports that in the second of the Ajar novels Gary was able to change his style so radically as to make the profile of his common-word usage distinct from that in his other work. The search for stylistic markers which are outside the conscious control of the writer has led to a divergence between literary interpretation and stylometry, since, as Horton puts it, "the textual features that stand out to a literary scholar usually reflect a writer's conscious stylistic decisions and are thus open to imitation, deliberate or otherwise" (quoted in Lancashire). In support of these unconscious style markers are the studies that show that much of language production is done by parts of the brain which act in such swift and complex ways that they can be called a true linguistic unconscious (Crane). Lancashire adds: "This is not to say that we cannot ourselves form a sentence mentally, edit it in memory, and then speak it or write it, just that the process is so arduous, time-consuming, and awkward that we seldom strive to use it."

Naturally it is incumbent on the researchers to show their audiences how markers invisible to writers or readers can be operative in texts at more than a narrowly statistical level, to avoid what Furbank and Owens call "the spectre of the meaningless". As Love argues, a stylistic explanation for grouping or differentiating texts is always to be preferred to a stylometric or "black-box" one. Unlike most research in the humanities, results of the purely computational kind cannot be checked against the reader's recollection or fresh study of the texts: in principle, the only check is a replication of the statistical tests themselves.

The first factor to consider in an authorship question is the number of candidates involved. There may be no obvious candidates indicated by external evidence; a group of candidates, from two to a very large but still defined number; or there may be a single candidate with an unlimited group of other possible authors.

Commentators have generally been pessimistic about all but two-author problems. Furbank and Owens, assessing the potential usefulness of quantitative stylistics in testing attributions in the very large group of texts associated with Daniel Defoe, concluded that these new methods "were not yet in a position to replace more traditional approaches". They proceeded to work on external and purely qualitative internal evidence in establishing a revised Defoe canon. The most-cited successful attribution on quantitative measures, the assignation of a group of essays in The Federalist, is an adjudication between two candidates.

The disputed papers could only have been written by James Madison or Alexander Hamilton; the problem was solved by Mosteller and Wallace using function-word markers. There are grounds for optimism, on the other hand, that with advances in technique and in the availability of good-quality, well-suited data, attribution by computational stylistics can make a contribution even in complicated cases.

Burrows, for instance, reports success with a recently developed technique adapted for multiple-candidate problems. It establishes multiple authorial profiles and determines a distance from each of these to the target text, as a measure of which author's work the mystery piece is "least unlike". The way forward would seem to be by a double movement: on the one hand, calibrating the reliability of attribution methods by good technique and sheer collective experience; on the other, with the increasing availability of text for comparison, advancing on even the most difficult problems (especially those involving a single author as against an unlimited group of others) and searching for good candidates from a very large pool.
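Burrows's published procedure is more elaborate, but a hedged sketch of the profile-and-distance idea is easy to give: z-score the rates of very common words across the candidate profiles, then rank candidates by their mean absolute distance from the mystery text. The word frequencies below are invented for illustration.

```python
# Sketch of a multiple-candidate profile-and-distance attribution:
# the smallest distance marks the author the mystery piece is "least unlike".
import numpy as np

def profile_distances(author_profiles, mystery, eps=1e-9):
    """author_profiles: dict name -> mean relative frequencies of common words.
    mystery: the same words' frequencies in the disputed text."""
    names = list(author_profiles)
    matrix = np.array([author_profiles[n] for n in names])
    mu = matrix.mean(axis=0)           # corpus-wide mean rate per word
    sigma = matrix.std(axis=0) + eps   # corpus-wide spread per word
    z_authors = (matrix - mu) / sigma  # each author's z-score profile
    z_mystery = (mystery - mu) / sigma
    return {n: float(np.abs(z_authors[i] - z_mystery).mean())
            for i, n in enumerate(names)}

profiles = {
    "Author A": np.array([0.061, 0.035, 0.028, 0.022, 0.017]),
    "Author B": np.array([0.055, 0.041, 0.025, 0.026, 0.015]),
    "Author C": np.array([0.058, 0.037, 0.031, 0.020, 0.019]),
}
mystery = np.array([0.056, 0.040, 0.026, 0.025, 0.016])
print(sorted(profile_distances(profiles, mystery).items(), key=lambda kv: kv[1]))
```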

It may be misguided to aim for complete certainty in attaching a single name to a doubtful text. There is, after all, much to be gained from eliminating one or more candidates, while remaining uncertain about the actual author, or from narrowing authorship to a small group of names. It may be useful to conclude with a brief list of commonly encountered pitfalls in authorial attribution by computational stylistics.

One commonly encountered pitfall is mistaking a difference of genre for a difference of author. This is obvious when one authorial group is all tragedies and the second all comedies; even if all texts are from the same genre, there must always be the question whether the difference is really between, say, comedies set in the city and those set in the court. This kind of confusion was the downfall of what Schoenbaum calls the "parallelographic school". Another pitfall is the free choice of markers. Furbank and Owens put it this way: "any liberty to choose one test, rather than another for a given authorship problem will lay you open to incalculable, perhaps subliminal, temptations to choose the one that will help you to prove what you want to prove."

More positively, one might consider a set of ideal conditions for an attribution on stylistic grounds, chief among them ample samples: in the experiment in assigning Restoration poems to their authors described in Burrows, reliability increases with sample length. Purists would say that all these conditions must be met before a secure attribution can be made, but it may be more realistic to see a continuum of reliability to which all these conditions contribute.

Attribution work may still yield valuable results, in eliminating some of the candidates, for example, where there are significant deficits in one or more of these conditions. Computer-assisted stylistics and authorship studies are at an interesting stage. A remarkable range of studies has been built up; methods have been well calibrated by a variety of researchers on a variety of problems; and even if some studies have proved faulty, the vigorous discussion of their shortcomings is a resource for those who follow. Well-tested methods can now provide multiple approaches to a given problem, so that results can be triangulated and cross-checked.

There is an ever-increasing volume of text available in machine-readable form: this means that a large purpose-built corpus can now be assembled quite quickly in many areas. There are enough successes to suggest that computational stylistics and non-traditional attribution have become essential tools, the first places one looks to for answers on very large questions of text patterning, and on difficult authorship problems. It is worth noting, too, that the lively debate provoked by computational work in stylistics and authorship is an indication that these activities are playing a significant part in a much wider contemporary discussion about the relations of the human and the mechanical.

References

Biber, D. Variation across Speech and Writing. Cambridge: Cambridge University Press.
Burrows, J. Computation into Criticism. Oxford: Clarendon Press.
Burrows, J. Scriblerian and the Kit-Kats.
Burrows, J. Literary and Linguistic Computing.
Burrows, J. and H. Love. Eighteenth Century Life: 18–.
Craig, D.
Crane, M. Shakespeare's Brain: Reading with Cognitive Theory.

A Statistical Method for Determining Authorship. Gothenburg: University of Gothenburg.
Erasmus, Desiderius. Collected Works of Erasmus. Ed. Brady and John C. Olin. Toronto: University of Toronto Press.
Erdman, David V. and E. G. Fogel, eds. Evidence for Authorship: Essays on Problems of Attribution.
Fish, S. Is There a Text in This Class? The Authority of Interpretive Communities.
Foster, Donald W. Elegy by W. S.: A Study in Attribution.
Furbank, P. N. and W. R. Owens. The Canonisation of Daniel Defoe.

Furbank, P. N. and W. R. Owens. Dangerous Relations. Scriblerian and the Kit-Kats: –4.
Jackson, MacD. P. Determining Authorship: A New Technique. Research Opportunities in Renaissance Drama: 1–.
Johnstone, B. The Linguistic Individual. New York: Oxford University Press.
Lancashire, I. Empirically Determining Shakespeare's Idiolect. Shakespeare Studies.
Lancashire, I. Paradigms of Authorship.
Lancashire, I. Probing Shakespeare's Idiolect in Troilus and Cressida, 1. University of Toronto Quarterly.
Love, Harold. Attributing Authorship: An Introduction. Cambridge: Cambridge University Press.
McMenamin, G. Forensic Stylistics. Amsterdam: Elsevier.

Milic, Louis T. Progress in Stylistics: Theory, Statistics, Computers. Computers and the Humanities.
Mosteller, F. and D. L. Wallace. Inference and Disputed Authorship: The Federalist. Reading, MA: Addison-Wesley.
Schoenbaum, S. Internal Evidence and Elizabethan Dramatic Authorship. London: Arnold.
Tirvengadum, Vina. Linguistic Fingerprints and Literary Fraud. Computing in the Humanities Working Papers. Accessed November 1.

The corpus is a fundamental tool for any type of research on language. The availability of computers in the 1960s immediately led to the creation of corpora in electronic form that could be searched automatically for a variety of language features and used to compute frequency, distributional characteristics, and other descriptive statistics.
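As a toy illustration of that kind of automatic searching (the three "documents" are invented), word frequencies and a simple dispersion count across a corpus take only a few lines:

```python
# Sketch: word frequency and document dispersion over a tiny corpus.
from collections import Counter
import re

documents = [
    "The cat sat on the mat.",
    "The dog sat on the log.",
    "A cat and a dog met on the road.",
]

def tokenize(text):
    return re.findall(r"[a-z']+", text.lower())

freq = Counter()      # total occurrences of each word
doc_freq = Counter()  # number of documents each word appears in
for doc in documents:
    tokens = tokenize(doc)
    freq.update(tokens)
    doc_freq.update(set(tokens))

for word, count in freq.most_common(5):
    print(f"{word!r}: {count} occurrences in {doc_freq[word]} of {len(documents)} texts")
```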

Corpora of literary works were compiled to enable stylistic analyses and authorship studies, and corpora representing general language use became widely used in the field of lexicography. In this era, the creation of an electronic corpus required entering the material by hand, and the storage capacity and speed of computers available at the time put limits on how much data could realistically be analyzed at any one time.

Without the Internet to foster data sharing, corpora were typically created and processed at a single location. For several years, the Brown corpus and its British counterpart, the LOB corpus, were the only widely available computer-readable corpora of general language, and therefore provided the data for numerous language studies. In the 1980s, the speed and capacity of computers increased dramatically, and, with more and more texts being produced in computerized form, it became possible to create corpora much larger than the Brown and LOB, containing millions of words.

The availability of language samples of this magnitude opened up the possibility of gathering meaningful statistics about language patterns that could be used to drive language processing software such as syntactic parsers, which sparked renewed interest in corpus compilation within the computational linguistics community. Parallel corpora, which contain the same text in two or more languages, also began to appear; the best known of these is the Canadian Hansard corpus of Parliamentary debates in English and French.

Corpus creation still involved considerable work, even when texts could be acquired from other sources in electronic form. For example, many texts existed as typesetter's tapes obtained from publishers, and substantial processing was required to remove or translate typesetter codes. The "golden era" of linguistic corpora began in the 1990s and continues to this day. Enormous corpora of both text and speech have been, and continue to be, compiled, many by government-funded projects in Europe, the USA, and Japan. In addition to monolingual corpora, several multilingual parallel corpora covering multiple languages have also been created.

A side effect of the growth in the availability and use of corpora in the 1990s was the development of automatic techniques for annotating language data with information about its linguistic properties. Algorithms for assigning part-of-speech tags to words in a corpus, and for aligning words and sentences in parallel texts (i.e., pairing each segment with its translation), were among the first to be developed. Automatic means to identify syntactic configurations such as noun phrases, and proper names, dates, etc., followed. However, because of the cost and difficulty of obtaining some types of texts (e.g., fiction and other copyrighted material), few corpora are balanced across text types.

In fact the greatest number of existing text corpora are composed of readily available materials such as newspaper data, technical manuals, government documents, and, more recently, materials drawn from the World Wide Web. Speech data, whose acquisition is in most instances necessarily controlled, are more often representative of a specific dialect or range of dialects.

Many corpora are available for research purposes by signing a license and paying a small reproduction fee. Other corpora are available only on payment of a sometimes substantial fee; this is the case, for instance, for many of the holdings of the LDC (Linguistic Data Consortium), making them virtually inaccessible to humanists.

The first phase of corpus creation is data capture, which involves rendering the text in electronic form, either by hand or via optical character recognition (OCR), or by acquisition of word processor or publishing software output, typesetter tapes, PDF files, etc.

Manual entry is time-consuming and costly, and therefore unsuitable for the creation of very large corpora. OCR output can be similarly costly if it requires substantial post-processing to validate the data. Data acquired in electronic form from other sources will almost invariably contain formatting codes and other information that must be discarded or translated to a representation that is processable for linguistic analysis.

At this time, the most common representation format for linguistic corpora is XML. The XCES (XML Corpus Encoding Standard) introduced the notion of stand-off annotation, which requires that annotations be encoded in documents separate from the primary data and linked to them. One of the primary motivations for this approach is to avoid the difficulties of overlapping hierarchies, which are common when annotating diverse linguistic features, as well as the unwieldy documents that can be produced when multiple annotations are associated with a single document.

The stand-off approach also allows for multiple annotations of the same feature (e.g., alternative part-of-speech taggings) to coexist as separate documents. Finally, it supports two basic notions about text and annotations outlined in Leech: it should be possible to remove the annotation from an annotated corpus in order to revert to the raw corpus; and, conversely, it should be possible to extract the annotations by themselves from the text.
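The XCES itself serializes all of this as XML, but the underlying idea can be shown with a toy sketch: the primary data stay untouched, and a separate annotation layer points into them by character offsets, so the raw text and the annotations can each be recovered independently, as Leech's two notions require. The sentence and tags here are invented for illustration.

```python
# Sketch of stand-off annotation: annotations live apart from the primary
# data and reference it by (start, end) character offsets.
primary = "Colorless green ideas sleep furiously."

pos_layer = [
    {"start": 0,  "end": 9,  "tag": "ADJ"},   # "Colorless"
    {"start": 10, "end": 15, "tag": "ADJ"},   # "green"
    {"start": 16, "end": 21, "tag": "NOUN"},  # "ideas"
    {"start": 22, "end": 27, "tag": "VERB"},  # "sleep"
    {"start": 28, "end": 37, "tag": "ADV"},   # "furiously"
]

# The raw text is recoverable untouched, and the annotation layer can be
# inspected, replaced, or stacked with rival layers, independently.
for ann in pos_layer:
    print(primary[ann["start"]:ann["end"]], ann["tag"])
```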

The use of stand-off annotation is now widely accepted as the norm among corpus and corpus-handling software developers; however, because mechanisms for inter-document linking have only recently been developed within the XML framework, many existing corpora include annotations in the same document as the text. The use of the stand-off model dictates that a distinction be made between the primary data (i.e., the text itself) and the annotations. The XCES identifies two types of information that may be encoded in the primary data: markup for the gross structure of a document (parts, chapters, paragraphs, titles, and the like) and markup for sub-paragraph elements (such as sentences, quotations, highlighted words, names, and dates).

Speech data, especially speech signals, are often treated as "read-only", and therefore the primary data contain no XML markup to which annotations may be linked. In this case, stand-off documents identify the start and end points (typically using byte offsets) of the structures listed above, and annotations are linked indirectly to the primary data by referencing the structures in these documents.

The annotation graph representation format used in the ATLAS project, which is intended primarily to handle speech data, relies entirely on this approach to link annotations to data, with no option for referencing XML-tagged elements. Markup identifying the boundaries of gross structures may be automatically generated from original formatting information. However, in most cases the original formatting is presentational rather than descriptive; for example, titles may be identifiable because they are in bold, and therefore transduction to a descriptive XML representation may not be straightforward.

This is especially true for sub-paragraph elements that are in italic or bold font; it is usually impossible to tag such elements automatically as emphasis, foreign word, etc.

Bostrom suggests that all humans could become wealthy from AIs. But he doesn't notice that more than half of the world's wealth and resources is now owned by one percent of its people, and that the balance is heading ever further in the one percent's favour, because they have the wealth to ensure that it does.

They rent the land, they own the debt, they own the manufacturing and the resource mines. Homeowners could be devastated by sea rise and climate change (a prospect Bostrom doesn't look at), but the super-wealthy can just move to another of their homes. Again, I found in a later chapter lines like: "For example, suppose that we want to start with some well-motivated human-like agents - let us say emulations. We want to boost the cognitive capacities of these agents, but we worry that the enhancements might corrupt their motivations. One way to deal with this challenge would be to set up a system in which individual emulations function as subagents.

When a new enhancement is introduced, it is first applied to a small subset of the subagents. Its effects are then studied by a review panel composed of subagents who have not yet had the enhancement applied to them. I have to think that the author, Director of the Future of Humanity Institute and Professor of the Faculty of Philosophy at Oxford, is so used to writing for engineers or philosophers that he loses out on what really helps the average interested reader.

For this reason I'm giving Superintelligence four stars, but someone working in this AI industry may of course feel it deserves five stars. If so, I'm not going to argue with her. In fact I'm going to be very polite.

May 29, Rod Van Meter rated it "really liked it".


Is the surface of our planet -- and maybe every planet we can get our hands on -- going to be carpeted in paper clips and paper clip factories by a well-intentioned but misguided artificial intelligence (AI) that ultimately cannibalizes everything in sight, including us, in single-minded pursuit of a seemingly innocuous goal?



It doesn't require Skynet and Terminators, it doesn't require evil geniuses bent on destroying the world, it just requires a powerful AI with a moral system in which humanity's welfare is irrelevant or defined very differently than most humans today would define it. This is perhaps the most important book I have read this decade, and it has kept me awake at night for weeks. I want to tell you why, and what I think, but a lot of this is difficult ground, so please bear with me.


I've also been skeptical of the idea that AIs will destroy us, either on purpose or by accident. Bostrom's book has made me think that perhaps I was naive. I still think that, on the whole, his worst-case scenarios are unlikely. However, he argues persuasively that we can't yet rule out any number of bad outcomes of developing AI, and that we need to be investing much more in figuring out whether developing AI is a good idea.

We may need to put a moratorium on research, as was done for a few years with recombinant DNA starting in the mid-1970s. We also need to be prepared for the possibility that such a moratorium doesn't hold. Bostrom also brings up any number of mind-bending dystopias around what qualifies as human, which we'll get to below. Bostrom skirts the issue of whether an AI will be conscious, or "have qualia", as I think the philosophers of mind say.

Where Bostrom and I differ is in the level of plausibility we assign to the idea of a truly exponential explosion in intelligence by AIs, in a takeoff for which Vernor Vinge coined the term "the Singularity." I read one of Kurzweil's books a number of years ago, and I found it imbued with a lot of near-mystic hype. He believes the Universe's purpose is the creation of intelligence, and that that process is growing on a double exponential, starting from stars and rocks through slime molds and humans and on to digital beings.

I'm largely allergic to that kind of hooey. I really don't see any evidence of the domain-to-domain acceleration that Kurzweil sees, and in particular the shift from biological to digital beings will result in a radical shift in the evolutionary pressures. I also don't see that Kurzweil really pays any attention to the physical limits of what will ultimately be possible for computing machines. Exponentials can't continue forever, as Danny Hillis is fond of pointing out.

So perhaps my opinion is somewhat biased by a dislike of Kurzweil's circus barker approach, but I think there is more to it than that. Fundamentally, I would put it this way: Being smart is hard. And making yourself smarter is also hard. My inclination is that getting smarter is at least as hard as the advantages it brings, so that the difficulty of the problem and the resources that can be brought to bear on it roughly balance.

This will result in a much slower takeoff than Kurzweil reckons, in my opinion. Bostrom presents a spectrum of takeoff speeds, from "too fast for us to notice" through "long enough for us to develop international agreements and monitoring institutions," but he makes it fairly clear that he believes that the probability of a fast takeoff is far too large to ignore.

There are parts of his argument I find convincing, and parts I find less so. To give you a little more insight into why I am a little dubious that the Singularity will happen in what Bostrom would describe as a moderate to fast takeoff, let me talk about the kinds of problems we human beings solve, and that an AI would have to solve. Actually, rather than the kinds of questions, first let me talk about the kinds of answers we would like an AI or a pet family genius to generate when given a problem. Off the top of my head, I can think of six: [Speed] Same quality of answer, just faster.

The first three are really about how the answers are generated; the last three about what we want to get out of them. I think this set is reasonably complete and somewhat orthogonal, despite those differences. So what kinds of problems do we apply these styles of answers to? We ultimately want answers that are "better" in some qualitative sense.


Humans are already pretty good at projecting the trajectory of a baseball, but it's certainly conceivable that a robot batter could be better, by calculating faster and using better data. Such a robot might make for a boring opponent for a human, but it would not be beyond human comprehension. But if you accidentally knock a bucket of baseballs down a set of stairs, better data and faster computing are unlikely to help you predict the exact order in which the balls will reach the bottom and what happens to the bucket.

Someone "smarter" might be able to make some interesting statistical predictions that wouldn't occur to you or me, but not fill in every detail of every interaction between the balls and stairs. Chaos, in the sense of sensitive dependence on initial conditions, is just too strong. In chess, go, or shogi, a x improvement in the number of plies that can be investigated gains you maybe only the ability to look ahead two or three moves more than before.

Less if your pruning (discarding unpromising paths) is poor, more if it's good. Don't get me wrong -- that's a huge deal, as any player will tell you. But in this case, humans are already pretty good, when not time limited. Go players like to talk about how close the top pros are to God, and the possibly apocryphal answer from one top pro was that he would want a three-stone (three-move) handicap, four if his life depended on it.

Compare this to the fact that a top pro is still some ten stones stronger than me, a fair amateur, and could beat a rank beginner even if the beginner was given the first forty moves. Top pros could sit across the board from an almost infinitely strong AI and still hold their heads up. In the most recent human-versus-computer shogi (Japanese chess) series, humans came out on top, though presumably this won't last much longer.
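The arithmetic behind this point about search depth is easy to sketch. With branching factor b, searching k times as many positions buys only about log base b of k extra plies; the branching factors below are rough conventional estimates, not figures from the review.

```python
# Sketch: extra plies bought by a k-fold increase in positions searched.
import math

def extra_plies(k, b):
    """Additional lookahead depth from searching k times as many positions."""
    return math.log(k) / math.log(b)

for game, b in [("chess", 35), ("shogi", 80), ("go", 250)]:
    print(f"{game} (b~{b}): 100x -> +{extra_plies(100, b):.1f} plies, "
          f"1000x -> +{extra_plies(1000, b):.1f} plies")
```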

In chess, as machines got faster, looked more plies ahead, carried around more knowledge, and got better at pruning the tree of possible moves, human opponents were heard to say that they felt the glimmerings of insight or personality from them. Simply being able to hold more data in your head or the AI's head while making a medical diagnosis using epidemiological data, or cross-correlating drug interactions, for example, will definitely improve our lives, and I can imagine an AI doing this.

Again, however, the AI's capabilities are unlikely to recede into the distance as something we can't comprehend. We know that increasing the amount of data you can handle by a factor of a thousand gains you 10x in each dimension for a 3-D model of the atmosphere or ocean, up until chaotic effects begin to take over, and then, as we currently understand it, you can only resort to repeated simulations and statistical measures.

The actual calculations done by a climate model long ago reached the point where even a large team of humans couldn't complete them in a lifetime. But they are not calculations we cannot comprehend, in fact, humans design and debug them. The size of computation grows quickly in many problems, and for many problems we believe that sheer computation is fundamentally limited in how well it can correspond to the real world.

But those are just the warmup. Those are things we already ask computers to do for us, even though they are "dumber" than we are. What about the latter three categories? I'm no expert in creativity, and I know researchers study it intensively, so I'm going to weasel through by saying it is the ability to generate completely new material, which involves some random process. You also need the ability either to generate that material such that it is aesthetically pleasing with high probability, or to prune those new ideas rapidly using some metric that achieves your goal.

For my purposes here, insight is the ability to be creative not just for esthetic purposes, but in a specific technical or social context, and to validate the ideas. No implication that artists don't have insight is intended, this is just a technical distinction between phases of the operation, for my purposes here. Einstein's insight for special relativity was that the speed of light is constant. Either he generated many, many hypotheses possibly unconsciously and pruned them very rapidly, or his hypothesis generator was capable of generating only a few good ones.

In either case, he also had the mathematical chops to prove, or at least analyze effectively, his hypothesis; this analysis likewise involves generating possible paths of proofs through the thicket of possibilities and finding the right one. So, will someone smarter be able to do this much better? Well, it's really clear that Einstein (or Feynman or Hawking, if your choice of favorite scientist leans that way) produced and validated hypotheses that the rest of us never could have.

How much better? A hundred times? A million? My guess is it's closer to the latter than the former. Even generating a single hypothesis that could be said to attack the problem is difficult, and most humans would decline to even try if you asked them to. Making better devices and systems of any kind requires all of the above capabilities. You must have insight to innovate, and you must be able to quantitatively and qualitatively analyze the new systems, requiring the heavy use of data.

As systems get more complex, all of this gets harder. My own favorite example is airplane engines. The Wright Brothers built their own engines for their planes. Today, it takes a team of hundreds to create a jet turbine -- thousands, if you reach back into the supporting materials, combustion and fluid flow research.

We humans have been able to continue to innovate by building on the work of prior generations, and especially harnessing teams of people in new ways. Unlike Peter Thiel, I don't believe that our rate of innovation is in any serious danger of some precipitous decline sometime soon, but I do agree that we begin with the low-lying fruit, so that harvesting fruit requires more effort -- or new techniques -- with each passing generation. The Singularity argument depends on the notion that the AI would design its own successor, or even modify itself to become smarter.

Will we watch AIs gradually pull even with us and then ahead, but not disappear into the distance in a Roadrunner-like flash of dust covering just a few frames of film in our dull-witted comprehension? Ultimately, this is the question on which continued human existence may depend: If an AI is enough smarter than we are, will it find the process of improving itself to be easy, or will each increment of intelligence be a hard problem for the system of the day?

This is what Bostrom calls the "recalcitrance" of the problem. I believe that the range of possible systems grows rapidly as they get more complex, and that evaluating them gets harder; this is hard to quantify, but each step might involve a thousand times as many options, or evaluating each option might be a thousand times harder. Growth in computational power won't dramatically overbalance that and give sustained, rapid and accelerating growth that moves AIs beyond our comprehension quickly.

Don't take these numbers seriously, it's just an example. Bostrom believes that recalcitrance will grow more slowly than the resources the AI can bring to bear on the problem, resulting in continuing, and rapid, exponential increases in intelligence -- the arrival of the Singularity. As you can tell from the above, I suspect that the opposite is the case, or that they very roughly balance, but Bostrom argues convincingly. He is forcing me to reconsider.

What about "values", my sixth type of answer, above? Ah, there's where it all goes awry. Chapter eight is titled, "Is the default scenario doom? What happens when we put an AI in charge of a paper clip factory, and instruct it to make as many paper clips as it can? With such a simple set of instructions, it will do its best to acquire more resources in order to make more paper clips, building new factories in the process. If it's smart enough, it will even anticipate that we might not like this and attempt to disable it, but it will have the will and means to deflect our feeble strikes against it.

Eventually, it will take over every factory on the planet, continuing to produce paper clips until we are buried in them. It may even go on to asteroids and other planets in a single-minded attempt to carpet the Universe in paper clips. I suppose it goes without saying that Bostrom thinks this would be a bad outcome. Bostrom reasons that AIs ultimately may or may not be similar enough to us that they count as our progeny, but doesn't hesitate to view them as adversaries, or at least rivals, in the pursuit of resources and even existence.

Bostrom clearly roots for humanity here. Which means it's incumbent on us to find a way to prevent this from happening. Bostrom thinks that instilling values that are actually close enough to ours that an AI will "see things our way" is nigh impossible. There are just too many ways that the whole process can go wrong. If an AI is given the goal of "maximizing human happiness," does it count when it decides that the best way to do that is to create the maximum number of digitally emulated human minds, even if that means sacrificing some of the physical humans we already have because the planet's carrying capacity is higher for digital than organic beings?

As long as we're talking about digital humans, what about the idea that a super-smart AI might choose to simulate human minds in enough detail that they are conscious, in the process of trying to figure out humanity? Do those recursively digital beings deserve any legal standing? Do they count as human? If their simulations are stopped and destroyed, have they been euthanized, or even murdered? Some of the mind-bending scenarios that come out of this recursion kept me awake nights as I was reading the book. He uses a variety of names for different strategies for containing AIs, including "genies" and "oracles".

Given that Bostrom attributes nearly infinite brainpower to an AI, it is hard to effectively rule out that an AI could still find some way to manipulate us into doing its will. If the AI's ability to probe the state of the world is likewise limited, Bostrom argues that it can still turn even single-bit probes of its environment into a coherent picture.

It can then decide to get loose and take over the world, identifying security flaws in outside systems that would allow it to do so even with its very limited ability to act. I think this unlikely. Imagine we set up a system to monitor the AI that alerts us immediately when the AI begins the equivalent of a port scan, for whatever its interaction mechanism is. How could it possibly know of the alert's existence and avoid triggering it? Bostrom has gone off the deep end in allowing an intelligence to infer facts about the world even when its data is very limited. Sherlock Holmes always turns out to be right, but that's fiction; in reality many, many hypotheses would suit the extremely slim amount of data he has.

The same will be true with carefully boxed AIs. At this point, Bostrom has argued that containing a nearly infinitely powerful intelligence is nearly impossible. That seems to me to be effectively tautological. If we can't contain them, what options do we have? After arguing earlier that we can't give AIs our own values (and presenting mind-bending scenarios for what those values might actually mean in a Universe with digital beings), he then turns around and invests a whole string of chapters in describing how we might actually go about building systems that have those values from the beginning.

At this point, Bostrom began to lose me. Beyond the systems for giving AIs values, I felt he went off the rails in describing human behavior in simplistic terms. We are incapable of balancing our desire to reproduce with a view of the tragedy of the commons, and are inevitably doomed to live out our lives in a rude, resource-constrained existence. There were some interesting bits in the taxonomies of options, but the last third of the book felt very speculative, even more so than the earlier parts.

Bostrom is rational and seems to have thought carefully about the mechanisms by which AIs may actually arise. Here, I largely agree with him. I think his faster scenarios of development, though, are unlikely: being smart, and getting smarter, is hard. He thinks a "singleton", a single, most powerful AI, is the nearly inevitable outcome.

I think populations of AIs are more likely, but if anything this appears to make some problems worse. I also think his scenarios for controlling AIs are handicapped in their realism by the nearly infinite powers he assigns them. In either case, Bostrom has convinced me that once an AI is developed, there are many ways it can go wrong, to the detriment and possibly extermination of humanity. Both he and I are opposed to this. I'm not ready to declare a moratorium on AI research, but there are many disturbing possibilities and many difficult moral questions that need to be answered. The first step in answering them, of course, is to begin discussing them in a rational fashion, while there is still time.

Read the first 8 chapters of this book!

Jun 24, Brendan Monroe rated it "it was ok". Shelves: shoulda-coulda, too-many-good-books-to-read-to-wast, science, we-re-all-doomed, non-fiction, scary-future, side-effects-include-extreme-boredo.

Reading this was like trying to wade through a pool of thick, gooey muck. Did I say pool? I meant ocean. And if you don't keep moving, you're going to get pulled under by Bostrom's complex mathematical formulas and labored writing and slowly suffocate. It shouldn't have been this way.

I went into it eagerly enough, having read a little recently about AI. It is a fascinating subject, after all. Wanting to know more, I picked up "Superintelligence".


I could say my relationship with this book was akin to the one Michael Douglas had with Glenn Close in "Fatal Attraction", but there was actually some hot sex in that film before all the crazy shit started happening. The only thing hot about this book is how parched the writing is. To say that this reads more like a textbook wouldn't be right either, as I have read some textbooks that were absolute nail-biters by comparison. Yes, I'm giving this 2 stars, but perhaps that's my own insecurity at refusing to let a 1-star piece of shit beat me.

This isn't an all-out bad book, it's just a book by someone who has something interesting to say but no idea of how to say it — at least, not to human beings. You know things aren't looking good when the author says in his introduction that he failed in what he set out to do — namely, write a readable book. Maybe save that for the afterword? But it didn't matter that I was warned. I slogged through the fog, finally throwing in the towel about a quarter of the way in.

I never thought someone could make artificial intelligence sound boring, but Nick Bostrom certainly has. The only part of the thing I liked at all was the nice little parable at the beginning about the owl. That lasted only a couple of pages, and you could tell Bostrom didn't write it, because it was: 1. understandable, and 2. interesting. If you're doing penance for some sin, forcing this down ought to cover a murder or two. Here you are, O.J. Justice has finally been served. To everyone else wanting to read this one: you really don't hate yourselves that much.

Jun 17, Gavin rated it really liked it. Shelves: insight-full, the-long-term. Like a lot of great philosophy, Superintelligence acts as a space elevator: you make many small, reasonable, careful movements - and you suddenly find yourself in outer space, home comforts far below. It is more rigorous about a topic which doesn't exist than you would think possible.

I didn't find it hard to read, but I have been marinating in tech rationalism for a few years and have absorbed much of Bostrom secondhand, so YMMV. I loved this: Many of the points made in this book are probably wrong. It is also likely that there are considerations of critical importance that I fail to take into account, thereby invalidating some or all of my conclusions.

Yet these topical applications of epistemic modesty are not enough; they must be supplemented here by a systemic admission of uncertainty and fallibility. This is not false modesty: for while I believe that my book is likely to be seriously wrong and misleading, I think that the alternative views that have been presented in the literature are substantially worse - including the default view, according to which we can for the time being reasonably ignore the prospect of superintelligence.

Bostrom introduces dozens of neologisms and many arguments. Here is the main scary a priori one, though:

1. Just being intelligent doesn't imply being benign; intelligence and goals can be independent.

2. Any agent which seeks resources and lacks explicit moral programming would default to dangerous behaviour. You are made of things it can use; hate is superfluous. (Instrumental convergence.)

3. It is conceivable that AIs might gain capability very rapidly through recursive self-improvement. (Non-negligible possibility of a hard takeoff; see the toy sketch below.)

The book is of far broader interest than its title and that argument might suggest. In particular, it is the best introduction I've seen to the new, shining decision sciences - an undervalued reinterpretation of old, vague ideas which, until recently, you only got to see if you read statistics, and economics, and the crunchier side of psychology.
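To make point 3 concrete, here is a toy numerical sketch - my own illustration, not a model from the book, and the starting capability and rates are arbitrary assumptions - of why recursive self-improvement differs in kind from ordinary progress: an agent whose per-step gain is proportional to its current capability compounds, while a fixed-rate improver only grows linearly.

    # Toy contrast: fixed-rate improvement vs. recursive self-improvement.
    # Illustrative only -- the starting capability and rates are arbitrary.

    def fixed_rate(capability, steps, gain=0.1):
        """External effort adds a constant increment each step."""
        for _ in range(steps):
            capability += gain
        return capability

    def self_improving(capability, steps, rate=0.1):
        """Each step's gain is proportional to current capability:
        the system applies its own intelligence to improving itself."""
        for _ in range(steps):
            capability += rate * capability
        return capability

    for steps in (10, 50, 100):
        print(steps,
              round(fixed_rate(1.0, steps), 1),
              round(self_improving(1.0, steps), 1))
    # After 100 steps the fixed-rate agent sits at 11.0, while the
    # self-improving one passes 13,000: a 'hard takeoff' in miniature.

Nothing says real AI progress must follow either curve; the sketch only shows why the compounding regime, if it ever applies, behaves qualitatively differently.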

It is also a history of humanity, a thoughtful treatment of psychometrics vs. genetics, and a rare objective estimate of the worth of large organisations, past and future. Superintelligence's main purpose is moral: he wants us to worry and act urgently about hypotheticals; given this rhetorical burden, his tone too is a triumph: For a child with an undetonated bomb in its hands, a sensible thing to do would be to put it down gently, quickly back out of the room, and contact the nearest adult.

Yet what we have here is not one child but many, each with access to an independent trigger mechanism. The chances that we will all find the sense to put down the dangerous stuff seem almost negligible. Some little idiot is bound to press the ignite button just to see what happens. Nor can we attain safety by running away, for the blast of an intelligence explosion would bring down the firmament. Nor is there a grown-up in sight. This is not a prescription of fanaticism. The intelligence explosion might still be many decades off in the future. Moreover, the challenge we face is, in part, to hold on to our humanity: to maintain our groundedness, common sense, and good-humored decency even in the teeth of this most unnatural and inhuman problem.

We need to bring all human resourcefulness to bear on its solution. I don't donate to AI safety orgs, despite caring about the best way to improve the world, despite having no argument against it better than "that's not how software has worked so far", and despite the concern of smart experts. This sober, kindly book made me realise this was more to do with fear of sneering than noble scepticism or empathy. Robin Hanson chokes eloquently here, and for god's sake let's hope he's right.

Feb 12, Tammam Aloudat rated it really liked it. Shelves: non-fiction. This is at once a difficult-to-read and a horrifying book.

The progress that we may or will see from "dumb" machines into super-intelligent entities can be daunting to take in and absorb, and the consequences can range from the extinction of human life all the way to a comfortable and effortlessly meaningful one. The first issue with the book is the complexity. It is not only the complexity of the scientific concepts included; one can read the book without necessarily fully understanding the nuances of the science.

It is the complexity of the language and the referencing of a multitude of legal, philosophical, and scientific concepts outside the direct domain of the book, from "Malthusian society" to the "Rawlsian veil of ignorance", as if assuming that the lay reader should, by definition, fully grasp each reference. This, I find, shows a lot of pretension on the side of the author. However, the book is a valuable analysis of the history, present, and possible futures of developing artificial and machine intelligence, one that is diverse and well thought out.

The author is critical and comprehensive and knows his stuff well. The book made me think of things I hadn't considered before and provided me with some frameworks for understanding how one can position oneself when confronted with the possibilities of intelligent or super-intelligent machines.

Another point is purely technical: I have learned a lot about the possibilities of artificial intelligence, which is apparently not only a programmed supercomputer but can include AIs that are adjusted copies of human brains, ones that do not require the maker to understand the intelligence of the machine they are creating. The book also talks in detail about some fascinating topics.

In a situation where, intelligence-wise, a machine is to a human what a human is to a mouse, we cannot even understand the ways a super-intelligent machine can out-think us, and we, for all intents and purposes, cannot make sure that such a machine is not going to override any safety features we put in place to contain it. We also cannot understand the many ways the AI can be motivated, towards what ends, and how any miscalculation on our side in making it can lead to grave consequences. The good news, in a way, is that we are still some time away (or so it seems) from a super-intelligent AI.

The one thing I missed more than anything in this book, to go back to the readability issue, is a little referencing that anchors the concepts we read about in concepts we already understand. After all, on the topic of AI we have a wealth of pop-culture references that would help us understand what the author is talking about, yet he did not so much as hint at them.

I was somewhat expecting that he would link the concepts he was talking about to science fiction known to us all. There is an art to linking science with culture that Mr. Bostrom has little grasp of in his somber and barely readable style. This book could have been much more fun and much easier to read.

Jul 30, Radiantflux rated it really liked it. Shelves: audiobook, technology. In brilliant fashion, Bostrom systematically examines how a super-intelligence might arise over the coming decades, and what humanity might do to avoid disaster.

Bottom-line: Not much.

Sep 22, Bill rated it liked it. Shelves: mind, ai. An extraordinary achievement: Nick Bostrom takes a topic as intrinsically gripping as the end of human history, if not the world, and manages to make it stultifyingly boring.

Nov 07, Meghan rated it liked it. Shelves: science-tech. More detail than I needed on the subject, but I might rue that statement when the android armies are swarming Manhattan.

Dec 14, Blake Crouch rated it it was amazing. The most terrifying book I've ever read. Dense, but brilliant.

Nov 16, Richard Ash rated it really liked it. Shelves: computers.

A few thoughts:

1. Very difficult topic to write about. There's so much uncertainty involved that it's almost impossible to even agree on the basic assumptions of the book.

2. The writing is incredibly thorough, given the assumptions, but also hard to understand. You need to follow the arguments closely and reread sections to fully understand their implications.

3. Overall, an interesting and thought-provoking book, even though the basic assumptions are debatable.

P.S. Bostrom's famous paperclip story: in it, an AI, Alice, is charged with collecting as many paperclips as she can. She achieves this goal by transforming the entire universe into a paperclip factory, destroying all life in the universe in the process.

Now the main lesson is that what we consider human values won't spontaneously arise in machines, and, as the story of Alice shows, this could be dangerous for humans. Bostrom visits this theme again and again throughout his book: we need to be very careful to teach machines human values, and not assume that those values will arise automatically.
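To see that lesson in miniature, here is a tiny sketch - my own invented illustration, not code from the book; the action list and scores are made up - of how an optimizer pursues exactly the objective it is given, while the values we leave implicit simply do not exist for it.

    # Toy planner illustrating the paperclip lesson. Only the moral is
    # Bostrom's; the actions and scores are invented for the example.

    ACTIONS = {
        # action: (paperclips gained, harm done to everything else)
        "run one paperclip factory":  (1_000, 0),
        "convert all industry":       (10**9, 50),
        "convert the whole universe": (10**15, 100),
    }

    def best_action(value):
        """Return the action that maximizes the given value function."""
        return max(ACTIONS, key=lambda a: value(*ACTIONS[a]))

    def naive(clips, harm):
        # Paperclips are all that counts; harm is invisible to the agent.
        return clips

    def aligned(clips, harm):
        # Also encodes the values we actually care about.
        return clips - 10**20 * harm

    print(best_action(naive))    # -> convert the whole universe
    print(best_action(aligned))  # -> run one paperclip factory

The hard part, of course, is that nobody knows how to write the aligned function for the real world, which is exactly the gap the book keeps circling.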

Jul 20, Ivan Petrovic rated it really liked it. Okay, so this was the most complex book I've ever read so far. Nick did a good job summarizing the subject of AI: mainly where we have gotten so far, what is feasible, and what to look out for in the future.
Yeah, it has a lot of assumptions and things that might not happen, but you literally start thinking in so many different ways about possibilities that had never occurred to you earlier. Oh, and the "tables", "figures", and "boxes" were also very interesting to read.