Silicon Valley Is Winning the Race to Build the First Driverless Cars - Henry Ford didn’t invent the motor car. The late 1800s saw a flurry of innovation by hundreds of companies battling to deliver on the promise of fast, effi...
"In fact, the latest results from ATLAS, one of the two giant detectors at the LHC, show that there's no evidence for many of the particles all the way up to 700 GeV, or over 100,000 times the mass of the electron and the two lightest quarks."
"But the deeper lesson of this whole exercise is that — to a degree I didn't appreciate until Kevin forced me to look — technology does indeed persist. Tools, machines, they change, they adapt, they morph, but they continue to be made. I hadn't noticed this tenaciousness before."
I show that physical devices that perform observation, prediction, or recollection share an underlying mathematical structure. I call devices with that structure "inference devices". I present a set of existence and impossibility results concerning inference devices. These results hold independently of the precise physical laws governing our universe. In a limited sense, the impossibility results establish that Laplace was wrong to claim that, even in a classical, non-chaotic universe, the future can be unerringly predicted given sufficient knowledge of the present. Alternatively, these impossibility results can be viewed as a non-quantum-mechanical "uncertainty principle". Next I explore the close connections between the mathematics of inference devices and of Turing Machines. In particular, the impossibility results for inference devices are similar to the Halting theorem for TMs. Furthermore, one can define an analog of Universal TMs (UTMs) for inference devices. I call those analogs "strong inference devices". I use strong inference devices to define the "inference complexity" of an inference task, which is the analog of the Kolmogorov complexity of computing a string. However, no universe can contain more than one strong inference device. So whereas the Kolmogorov complexity of a string is arbitrary up to specification of the UTM, there is no such arbitrariness in the inference complexity of an inference task. I end by discussing the philosophical implications of these results, e.g., for whether the universe "is" a computer.
Physics might be defined as the subject that tries to figure out why the world may look incomprehensibly complex at first, but on closer examination is governed by simple laws. Those laws, applied repeatedly, build up the complexity. From this definition, you'd presume that physicists have at least sorted out what they mean by "law".
In the race to build computers that can think like humans, the proving ground is the Turing test—an annual battle between the world's most advanced artificial-intelligence programs and ordinary people. The objective? To find out whether a computer can act "more human" than a person. In his own quest to beat the machines, the author discovers that the march of technology isn't just changing how we live, it's raising new questions about what it means to be human.
This essay hopes to persuade its readers that science ought to take the notion of deism far more seriously. The rise of the artilect in this century makes the notion of a hyperintelligent designer and creator of our universe far more plausible. It suggests the creation of a "hyper-physics" (as distinct from a traditional metaphysics, which poses the deepest of questions) that would "investigate" the tree of universes that a branching set of artilects may have created.
Motivated by recent developments impacting our view of Fermi's Paradox (the absence of extraterrestrials and their manifestations from our past light cone), we suggest a reassessment of the problem itself, as well as of the strategies employed by the various SETI projects so far. The need for such reassessment is fueled not only by the failure of SETI to date, but also by great advances recently made in astrophysics, astrobiology, computer science and future studies. Accordingly, we consider the effects of the observed metallicity and temperature gradients in the Milky Way galaxy on the spatial distribution of hypothetical advanced extraterrestrial intelligent communities. While the properties of such communities and their sociological and technological preferences are, obviously, unknown at present, we assume that (1) they operate in agreement with the known laws of physics and (2) at some point in their history they typically become motivated by a meta-principle embodying the central role of information processing; a prototype of the latter is the recently suggested Intelligence Principle of Steven J. Dick. There are specific conclusions of practical interest to astrobiological and SETI endeavors to be drawn from coupling these reasonable assumptions with the astrophysical and astrochemical structure of the spiral disk of our galaxy. In particular, we suggest that the outer regions of the Galactic disk are the most likely locations for advanced SETI targets, and that sophisticated intelligent communities will tend to migrate outward through the Galaxy as their information-processing capacities increase, for both thermodynamical and astrochemical reasons. However, this outward movement is limited by the decrease in matter density in the outer Milky Way. This can also be regarded as a possible generalization of the galactic habitable zone (GHZ) concept currently being investigated in astrobiology.
Keywords: Astrobiology; Galaxy: evolution; Extraterrestrial intelligence; Physics of computation; SETI