Feed aggregator


Decisions are for making bad outcomes inconsistent

The Singularity Institute Blog - Sat, 04/08/2017 - 00:02

Nate Soares’ recent decision theory paper with Ben Levinstein, “Cheating Death in Damascus,” prompted some valuable questions and comments from an acquaintance (anonymized here). I’ve put together edited excerpts from the commenter’s email below, with Nate’s responses.

The discussion concerns functional decision theory (FDT), a newly proposed alternative to causal decision theory (CDT) and evidential decision theory (EDT). Where EDT says “choose the most auspicious action” and CDT says “choose the action that has the best effects,” FDT says “choose the output of one’s decision algorithm that has the best effects across all instances of that algorithm.”

FDT usually behaves similarly to CDT. In a one-shot prisoner’s dilemma between two agents who know they are following FDT, however, FDT parts ways with CDT and prescribes cooperation, on the grounds that each agent runs the same decision-making procedure, and that therefore each agent is effectively choosing for both agents at once.1
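
To make the contrast concrete, here is a minimal sketch of the two decision rules in a one-shot prisoner's dilemma between two copies of the same algorithm. This is an illustration, not code from the paper; the payoff numbers are assumed.

```python
# Illustrative payoffs: PAYOFFS[(my_move, their_move)] -> my utility.
PAYOFFS = {
    ("C", "C"): 2, ("C", "D"): 0,
    ("D", "C"): 3, ("D", "D"): 1,
}

def cdt_choice(their_fixed_move):
    # CDT holds the other agent's move fixed and picks the best response.
    return max(["C", "D"], key=lambda m: PAYOFFS[(m, their_fixed_move)])

def fdt_choice():
    # FDT knows both agents run this same procedure, so choosing an output
    # effectively chooses it for both instances at once.
    return max(["C", "D"], key=lambda m: PAYOFFS[(m, m)])

# CDT defects whatever it believes its twin will do...
assert cdt_choice("C") == "D" and cdt_choice("D") == "D"
# ...while FDT cooperates, since (C, C) beats (D, D) for both copies.
assert fdt_choice() == "C"
```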

Below, Nate provides some of his own perspective on why FDT generally achieves higher utility than CDT and EDT. Some of the stances he sketches out here are stronger than the assumptions needed to justify FDT, but should shed some light on why researchers at MIRI think FDT can help resolve a number of longstanding puzzles in the foundations of rational action.

 

Anonymous: This is great stuff! I’m behind on reading loads of papers and books for my research, but this came across my path and hooked me, which speaks highly of how interesting the content is and of the sense that this paper is making progress.

My general take is that you are right that these kinds of problems need to be specified in more detail. However, my guess is that once you do so, game theorists would get the right answer. Perhaps that’s what FDT is: it’s an approach to clarifying ambiguous games that leads to a formalism where people like Pearl and myself can use our standard approaches to get the right answer.

I know there’s a lot of inertia in the “decision theory” language, so probably it doesn’t make sense to change. But if there were no such sunk costs, I would recommend a different framing. It’s not that people’s decision theories are wrong; it’s that they are unable to correctly formalize problems in which there are high-performance predictors. You show how to do that, using the idea of intervening on (i.e., choosing between putative outputs of) the algorithm, rather than intervening on actions. Everything else follows from a sufficiently precise and non-contradictory statement of the decision problem.

Probably the easiest move this line of work could make to ease this knee-jerk response of mine in defense of mainstream Bayesian game theory is to just be clear that CDT is not meant to capture mainstream Bayesian game theory. Rather, it is a model of one response to a class of problems not normally considered and for which existing approaches are ambiguous.

Nate Soares: I don’t take this view myself. My view is more like: When you add accurate predictors to the Rube Goldberg machine that is the universe — which can in fact be done — the future of that universe can be determined by the behavior of the algorithm being predicted. The algorithm that we put in the “thing-being-predicted” slot can do significantly better if its reasoning on the subject of which actions to output respects the universe’s downstream causal structure (which is something CDT and FDT do, but which EDT neglects), and it can do better again if its reasoning also respects the world’s global logical structure (which is done by FDT alone).

We don’t know exactly how to respect this wider class of dependencies in general yet, but we do know how to do it in many simple cases. While FDT agrees with modern decision theory and game theory in many simple situations, its prescriptions do seem to differ in non-trivial applications.

The main case where we can easily see that FDT is not just a better tool for formalizing game theorists’ traditional intuitions is in prisoner’s dilemmas. Game theory is pretty adamant about the fact that it’s rational to defect in a one-shot PD, whereas two FDT agents facing off in a one-shot PD will cooperate.

In particular, classical game theory employs a “common knowledge of shared rationality” assumption which, when you look closely at it, cashes out more or less as “common knowledge that all parties are using CDT and this axiom.” Game theory where common knowledge of shared rationality is defined to mean “common knowledge that all parties are using FDT and this axiom” gives substantially different results, such as cooperation in one-shot PDs.

A causal graph of Death in Damascus for CDT agents.2

Anonymous: When I’ve read MIRI work on CDT in the past, it seemed to me to describe what standard game theorists mean by rationality. But at least in cases like Murder Lesion, I don’t think it’s fair to say that standard game theorists would prescribe CDT. It might be better to say that standard game theory doesn’t consider these kinds of settings, and there are multiple ways of responding to them, CDT being one.

But I also suspect that many of these perfect prediction problems are internally inconsistent, and so it’s irrelevant what CDT would prescribe, since the problem cannot arise. That is, it’s not reasonable to say game theorists would recommend such-and-such in a certain problem, when the problem postulates that the actor always has incorrect expectations; “all agents have correct expectations” is a core property of most game-theoretic problems.

The Death in Damascus problem for CDT agents is a good example of this. In this problem, either Death will not find the CDT agent with certainty, or the CDT agent will never have correct beliefs about her own actions, or she will be unable to best respond to her own beliefs.

So the problem statement (“Death finds the agent with certainty”) rules out typical assumptions of a rational actor: that it has rational expectations (including about its own behavior), and that it can choose the preferred action in response to its beliefs. The agent can only have correct beliefs if she believes that she has such-and-such belief about which city she’ll end up in, but doesn’t select the action that is the best response to that belief.

Nate: I contest that last claim. The trouble is in the phrase “best response”, where you’re using CDT’s notion of what counts as the best response. According to FDT’s notion of “best response”, the best response to your beliefs in the Death in Damascus problem is to stay in Damascus, if we’re assuming it costs nonzero utility to make the trek to Aleppo.

In order to define what the best response to a problem is, we normally invoke a notion of counterfactuals — what are your available responses, and what do you think follows from them? But the question of how to set up those counterfactuals is the very point under contention.

So I’ll grant that if you define “best response” in terms of CDT’s counterfactuals, then Death in Damascus rules out the typical assumptions of a rational actor. If you use FDT’s counterfactuals (i.e., counterfactuals that respect the full range of subjunctive dependencies), however, then you get to keep all the usual assumptions of rational actors. We can say that FDT has the pre-theoretic advantage over CDT that it allows agents to exhibit sensible-seeming properties like these in a wider array of problems.

Anonymous: The presentation of the Death in Damascus problem for CDT feels weird to me. CDT might also just turn up an error, since one of its assumptions is violated by the problem. Or it might cycle through beliefs forever… The expected utility calculation here seems to give some credence to the possibility of dodging death, which is assumed to be impossible, so it doesn’t seem to me to correctly reason in a CDT way about where death will be.

For some reason I want to defend the CDT agent, and say that it’s not fair to say they wouldn’t realize that their strategy produces a contradiction (given the assumptions of rational belief and agency) in this problem.

Nate: There are a few different things to note here. First is that my inclination is always to evaluate CDT as an algorithm: if you built a machine that follows the CDT equation to the very letter, what would it do?

The answer here, as you’ve rightly noted above, is that the CDT equation isn’t necessarily defined when the input is a problem like Death in Damascus, and I agree that simple definitions of CDT yield algorithms that would either enter an infinite loop or crash. The third alternative is that the agent notices the difficulty and engages in some sort of reflective-equilibrium-finding procedure; variants of CDT with this sort of patch were invented more or less independently by Joyce and Arntzenius to do exactly that. In the paper, we discuss the variants that run an equilibrium-finding procedure and show that the equilibrium is still unsatisfactory; but we probably should have been more explicit about the fact that vanilla CDT either crashes or loops.
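
A toy deliberation loop makes the instability concrete. The sketch below is my own construction with assumed utilities (being found by Death costs 1000, the trek to Aleppo costs 1); each update to the agent's belief about her own act flips the CDT best response, so a naive implementation never settles.

```python
def cdt_best_response(predicted_city):
    # CDT treats Death's location as fixed by the prediction of the agent's
    # act, then evaluates each available act causally.
    def utility(act):
        death_cost = -1000 if act == predicted_city else 0  # Death waits there
        trek_cost = -1 if act == "Aleppo" else 0
        return death_cost + trek_cost
    return max(["Damascus", "Aleppo"], key=utility)

belief = "Damascus"
for _ in range(6):
    act = cdt_best_response(belief)
    print(f"believes Death expects {belief!r} -> chooses {act!r}")
    belief = act  # a consistent self-belief must match the chosen act

# The trace oscillates forever: Damascus -> Aleppo -> Damascus -> ...
# Patched variants of CDT replace this loop with a mixed equilibrium,
# which the paper argues is still unsatisfactory.
```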

Second, I acknowledge that there’s still a strong intuition that an agent should in some sense be able to reflect on their own instability, look at the problem statement, and say, “Aha, I see what’s going on here; Death will find me no matter what I choose; I’d better find some other way to make the decision.” However, this sort of response is explicitly ruled out by the CDT equation: CDT says you must evaluate your actions as if they were subjunctively independent of everything that doesn’t causally depend on them.

In other words, you’re correct that CDT agents know intellectually that they cannot escape Death, but the CDT equation requires agents to imagine that they can, and to act on this basis.

And, to be clear, it is not a strike against an algorithm for it to prescribe actions by reasoning about impossible scenarios — any deterministic algorithm attempting to reason about what it “should do” must imagine some impossibilities, because a deterministic algorithm has to reason about the consequences of doing lots of different things, but is in fact only going to do one thing.

The question at hand is which impossibilities are the right ones to imagine, and the claim is that in scenarios with accurate predictors, CDT prescribes imagining the wrong impossibilities, including impossibilities where it escapes Death.

Our human intuitions say that we should reflect on the problem statement and eventually realize that escaping Death is in some sense “too impossible to consider”. But this directly contradicts the advice of CDT. Following this intuition requires us to make our beliefs obey a logical-but-not-causal constraint in the problem statement (“Death is a perfect predictor”), which FDT agents can do but CDT agents can’t. On close examination, the “shouldn’t CDT realize this is wrong?” intuition turns out to be an argument for FDT in another guise. (Indeed, pursuing this intuition is part of how FDT’s predecessors were discovered!)

Third, I’ll note it’s an important virtue in general for decision theories to be able to reason correctly in the face of apparent inconsistency. Consider the following simple example:

An agent has a choice between taking $1 or taking $100. There is an extraordinarily tiny but nonzero probability that a cosmic ray will spontaneously strike the agent’s brain in such a way that they will be caused to do the opposite of whichever action they would normally do. If they learn that they have been struck by a cosmic ray, then they will also need to visit the emergency room to ensure there’s no lasting brain damage, at a cost of $1000. Furthermore, the agent knows that they take the $100 if and only if they are hit by the cosmic ray.

When faced with this problem, EDT agents reason: “If I take the $100, then I must have been hit by the cosmic ray, which means that I lose $900 on net. I therefore prefer the $1.” They then take the $1 (except in cases where they have been hit by the cosmic ray).

Since this is just what the problem statement says — “the agent knows that they take the $100 if and only if they are hit by the cosmic ray” — the problem is perfectly consistent, as is EDT’s response to the problem. EDT only cares about correlation, not dependency; so EDT agents are perfectly happy to buy into self-fulfilling prophecies, even when it means turning their backs on large sums of money.

What happens when we try to pull this trick on a CDT agent? She says, “Like hell I only take the $100 if I’m hit by the cosmic ray!” and grabs the $100 — thus revealing your problem statement to be inconsistent if the agent runs CDT as opposed to EDT.

The claim that “the agent knows that they take the $100 if and only if they are hit by the cosmic ray” contradicts the definition of CDT, which requires that CDT agents refuse to leave free money on the table. As you may verify, FDT also renders the problem statement inconsistent, for similar reasons. The definition of EDT, on the other hand, is fully consistent with the problem as stated.
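
One can check this mechanically. The sketch below (mine, using the dollar amounts from the dialogue) enumerates the ray/no-ray cases and tests which dispositions can coexist with the stipulation “takes the $100 iff hit by the ray”:

```python
def acted(normal_choice, ray):
    # The cosmic ray flips whatever the agent would normally do.
    flip = {"$1": "$100", "$100": "$1"}
    return flip[normal_choice] if ray else normal_choice

def stipulation_holds(normal_choice):
    # "The agent takes the $100 if and only if hit by the cosmic ray"
    # must hold in both the ray and the no-ray case.
    return all((acted(normal_choice, ray) == "$100") == ray
               for ray in (False, True))

# EDT normally takes the $1, so the stipulation holds in every case:
# no ray -> $1, ray -> $100. The problem is consistent for EDT.
assert stipulation_holds("$1")

# CDT (and FDT) normally grab the $100, so "takes $100 iff ray" fails in
# the no-ray case: the problem statement is revealed as inconsistent.
assert not stipulation_holds("$100")
```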

This means that if you try to put EDT into the above situation — controlling its behavior by telling it specific facts about itself — you will succeed; whereas if you try to put CDT into the above situation, you will fail, and the supposed facts will be revealed as lies. Whether or not the above problem statement is consistent depends on the algorithm that the agent runs, and the design of the algorithm controls the degree to which you can put that algorithm in bad situations.

We can think of this as a case of FDT and CDT succeeding in making a low-utility universe impossible, where EDT fails to make a low-utility universe impossible. The whole point of implementing a decision theory on a piece of hardware and running it is to make bad futures-of-our-universe impossible (or at least very unlikely). It’s a feature of a decision theory, and not a bug, for there to be some problems where one tries to describe a low-utility state of affairs and the decision theory says, “I’m sorry, but if you run me in that problem, your problem will be revealed as inconsistent”.3

This doesn’t contradict anything you’ve said; I say it only to highlight how little we can conclude from noticing that an agent is reasoning about an inconsistent state of affairs. Reasoning about impossibilities is the mechanism by which decision theories produce actions that force the outcome to be desirable, so we can’t conclude that an agent has been placed in an unfair situation from the fact that the agent is forced to reason about an impossibility.

A causal graph of the XOR blackmail problem for CDT agents.4

Anonymous: Something still seems fishy to me about decision problems that assume perfect predictors. If I’m being predicted with 100% accuracy in the XOR blackmail problem, then this means that I can induce a contradiction. If I follow FDT and CDT’s recommendation of never paying, then I only receive a letter when I have termites. But if I pay, then I must be in the world where I don’t have termites, as otherwise there is a contradiction.

So it seems that I am able to intervene on the world in a way that changes the state of termites for me now, given that I’ve received a letter. That is, the best strategy when starting is to never pay, but the best strategy given that I will receive a letter is to pay. The weirdness arises because I’m able to intervene on the algorithm, but we are conditioning on a fact of the world that depends on my algorithm.

Not sure if this confusion makes sense to you. My gut says that these kinds of problems are often self-contradicting, at least when we assert 100% predictive performance. I would prefer to work it out from the ex ante situation, with specified probabilities of getting termites, and see if it is the case that changing one’s strategy (at the algorithm level) is possible without changing the probability of termites to maintain consistency of the prediction claim.

Nate: First, I’ll note that the problem goes through fine if the prediction is only correct 99% of the time. If the difference between “cost of termites” and “cost of paying” is sufficiently high, then the problem can probably go through even if the predictor is only correct 51% of the time.

That said, the point of this example is to draw attention to some of the issues you’re raising here, and I think that these issues are just easier to think about when we assume 100% predictive accuracy.

The claim I dispute is this one: “That is, the best strategy when starting is to never pay, but the best strategy given that I will receive a letter is to pay.” I claim that the best strategy given that you receive the letter is to not pay, because whether you pay has no effect on whether or not you have termites. Whenever you pay, no matter what you’ve learned, you’re basically just burning $1000.

That said, you’re completely right that these decision problems have some inconsistent branches, though I claim that this is true of any decision problem. In a deterministic universe with deterministic agents, all “possible actions” the agent “could take” save one are not going to be taken, and thus all “possibilities” save one are in fact inconsistent given a sufficiently full formal specification.

I also completely endorse the claim that this set-up allows the predicted agent to induce a contradiction. Indeed, I claim that all decision-making power comes from the ability to induce contradictions: the whole reason to write an algorithm that loops over actions, constructs models of outcomes that would follow from those actions, and outputs the action corresponding to the highest-ranked outcome is so that it is contradictory for the algorithm to output a suboptimal action.

This is what computer programs are all about. You write the code in such a fashion that the only non-contradictory way for the electricity to flow through the transistors is in the way that makes your computer do your tax returns, or whatever.

In the case of the XOR blackmail problem, there are four “possible” worlds: LT (letter + termites), NT (noletter + termites), LN (letter + notermites), and NN (noletter + notermites).

The predictor, by dint of their accuracy, has put the universe into a state where the only consistent possibilities are either (LT, NN) or (LN, NT). You get to choose which of those pairs is consistent and which is contradictory. Clearly, you don’t have control over the probability of termites vs. notermites, so you’re only controlling whether you get the letter. Thus, the question is whether you’re willing to pay $1000 to make sure that the letter shows up only in the worlds where you don’t have termites.
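
The ex ante comparison can be written out directly. In the sketch below (mine; a perfect predictor is assumed, with termite probability q and the dollar amounts from the dialogue), the agent's policy determines which pair of worlds remains consistent, and refusing is cheaper for every q:

```python
def expected_loss(policy, q):
    # q: probability of termites; policy: response upon receiving the letter.
    if policy == "pay":
        # Consistent worlds: (noletter, termites) and (letter, notermites).
        return q * 1_000_000 + (1 - q) * 1_000
    else:  # refuse
        # Consistent worlds: (letter, termites) and (noletter, notermites).
        return q * 1_000_000

for q in (0.01, 0.5, 0.99):
    assert expected_loss("refuse", q) < expected_loss("pay", q)
# Paying only changes *which* worlds receive the letter; it never changes
# whether the termites are there, so refusal dominates.
```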

Even when you’re holding the letter in your hands, I claim that you should not say “if I pay I will have no termites”, because that is false — your action can’t affect whether you have termites. You should instead say:

I see two possibilities here. If my algorithm outputs pay, then in the XX% of worlds where I have termites I get no letter and lose $1M, and in the (100-XX)% of worlds where I do not have termites I lose $1k. If instead my algorithm outputs refuse, then in the XX% of worlds where I have termites I get this letter but only lose $1M, and in the other worlds I lose nothing. The latter mixture is preferable, so I do not pay.

You’ll notice that the agent in this line of reasoning is not updating on the fact that they’re holding the letter. They’re not saying, “Given that I know that I received the letter and that the universe is consistent…”

One way to think about this is to imagine the agent as not yet being sure whether or not they’re in a contradictory universe. They act like this might be a world in which they don’t have termites, and they received the letter; and in those worlds, by refusing to pay, they make the world they inhabit inconsistent — and thereby make this very scenario never-have-existed.

And this is correct reasoning! For when the predictor makes their prediction, they’ll visualize a scenario where the agent has no termites and receives the letter, in order to figure out what the agent would do. When the predictor observes that the agent would make that universe contradictory (by refusing to pay), they are bound (by their own commitments, and by their accuracy as a predictor) to send the letter only when you have termites.5

You’ll never find yourself in a contradictory situation in the real world, but when an accurate predictor is trying to figure out what you’ll do, they don’t yet know which situations are contradictory. They’ll therefore imagine you in situations that may or may not turn out to be contradictory (like “letter + notermites”). Whether or not you would force the contradiction in those cases determines how the predictor will behave towards you in fact.

The real world is never contradictory, but predictions about you can certainly place you in contradictory hypotheticals. In cases where you want to force a certain hypothetical world to imply a contradiction, you have to be the sort of person who would force the contradiction if given the opportunity.

Or as I like to say — forcing the contradiction never works, but it always would’ve worked, which is sufficient.

Anonymous: The FDT algorithm is best ex ante. But if what you care about is your utility in your own life flowing after you, and not that of other instantiations, then upon hearing this news about FDT you should do whatever is best for you given that information and your beliefs, as per CDT.

A causal graph of Newcomb’s problem for FDT agents.6

Nate: If you have the ability to commit yourself to future behaviors (and actually stick to that), it’s clearly in your interest to commit now to behaving like FDT on all decision problems that begin in your future. I, for instance, have made this commitment myself. I’ve also made stronger commitments about decision problems that began in my past, but all CDT agents should agree in principle on problems that begin in the future.7

I do believe that real-world people like you and me can actually follow FDT’s prescriptions, even in cases where those prescriptions are quite counter-intuitive.

Consider a variant of Newcomb’s problem where both boxes are transparent, so that you can already see whether box B is full before choosing whether to two-box. In this case, EDT joins CDT in two-boxing, because one-boxing can no longer serve to give the agent good news about its fortunes. But FDT agents still one-box, for the same reason they one-box in Newcomb’s original problem and cooperate in the prisoner’s dilemma: they imagine their algorithm controlling all instances of their decision procedure, including the past copy in the mind of their predictor.
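
For concreteness, here is the payoff arithmetic under the standard $1,000,000 / $1,000 box contents and a perfect predictor (a sketch of mine, not the paper's formalism):

```python
def payoff(policy):
    # The predictor simulated the agent's fixed policy before filling the
    # boxes: box B contains $1,000,000 iff one-boxing was predicted.
    box_b = 1_000_000 if policy == "one-box" else 0
    box_a = 1_000  # box A always contains $1,000
    return box_b if policy == "one-box" else box_b + box_a

assert payoff("one-box") == 1_000_000  # FDT agents walk away rich
assert payoff("two-box") == 1_000      # reliable two-boxers find box B empty
# "Two full boxes, and I two-box" never occurs: for a reliable two-boxer it
# is exactly one of the contradictory hypotheticals discussed above.
```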

Now, let’s suppose that you’re standing in front of two full boxes in the transparent Newcomb problem. You might say to yourself, “I wish I could have committed beforehand, but now that the choice is before me, the tug of the extra $1000 is just too strong”, and then decide that you were not actually capable of making binding precommitments. This is fine; the normatively correct decision theory might not be something that all human beings have the willpower to follow in real life, just as the correct moral theory could turn out to be something that some people lack the will to follow.8

That said, I believe that I’m quite capable of just acting like I committed to act. I don’t feel a need to go through any particular mental ritual in order to feel comfortable one-boxing. I can just decide to one-box and let the matter rest there.

I want to be the kind of agent that sees two full boxes, so that I can walk away rich. I care more about doing what works, and about achieving practical real-world goals, than I care about the intuitiveness of my local decisions. And in this decision problem, FDT agents are the only agents that walk away rich.

One way of making sense of this kind of reasoning is that evolution graced me with a “just do what you promised to do” module. The same style of reasoning that allows me to actually follow through and one-box in Newcomb’s problem is the one that allows me to cooperate in prisoner’s dilemmas against myself — including dilemmas like “should I stick to my New Year’s resolution?”9 I claim that it was only misguided CDT philosophers that argued (wrongly) that “rational” agents aren’t allowed to use that evolution-given “just follow through with your promises” module.

Anonymous: A final point: I don’t know about counterlogicals, but a theory of functional similarity would seem to depend on the details of the algorithms.

E.g., we could have a model where the algorithms’ outputs are stochastic, but some parameters of that process (such as its expected value) are shared, and the action is stochastically drawn from some distribution with those parameter values. We could have a version of that in which the parameter values depend on private information picked up since the algorithms split, in which case each agent would have to model the distribution of private info the other might have.

That seems pretty general; does that work? Is there a class of functional similarity that cannot be expressed using that formulation?

Nate: As long as the underlying distribution can be an arbitrary Turing machine, I think that’s sufficiently general.

There are actually a few non-obvious technical hurdles here; namely, if agent A is basing their beliefs off of their model of agent B, who is basing their beliefs off of a model of agent A, then you can get some strange loops.

Consider for example the matching pennies problem: agent A and agent B will each place a penny on a table; agent A wants either HH or TT, and agent B wants either HT or TH. It’s non-trivial to ensure that both agents develop stable accurate beliefs in games like this (as opposed to, e.g., diving into infinite loops).
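
Here is a toy illustration of such a loop, assuming naive simultaneous best-response updates (my construction, not the machinery from the paper):

```python
def best_response_A(b_move):
    return b_move                          # A wants to match B

def best_response_B(a_move):
    return "T" if a_move == "H" else "H"   # B wants to mismatch A

a, b = "H", "H"
for _ in range(4):
    a, b = best_response_A(b), best_response_B(a)  # simultaneous update
    print(a, b)  # cycles: (H, T), (T, T), (T, H), (H, H), ...

# There is no stable pure-strategy point; the only fixed point is for both
# agents to randomize 50/50, which is the kind of equilibrium the machinery
# described next is designed to find.
```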

The technical solution to this is reflective oracle machines, a class of probabilistic Turing machines with access to an oracle that can probabilistically answer questions about any other machine in the class (with access to the same oracle).

The paper “Reflective Oracles: A Foundation for Classical Game Theory” shows how to do this and shows that the relevant fixed points always exist. (And furthermore, in cases that can be represented in classical game theory, the fixed points always correspond to the mixed-strategy Nash equilibria.)

This more or less lets us start from a place of saying “how do agents with probabilistic information about each other’s source code come to stable beliefs about each other?” and gets us to the “common knowledge of rationality” axiom from game theory.10 One can also see it as a justification for that axiom, or as a generalization of that axiom that works even in cases where the lines between agent and environment get blurry, or as a hint at what we should do in cases where one agent has significantly more computational resources than the other, etc.

But, yes, when we study these kinds of problems concretely at MIRI, we tend to use models where each agent models the other as a probabilistic Turing machine, which seems roughly in line with what you’re suggesting here.

 

  1. CDT prescribes defection in this dilemma, on the grounds that one’s action cannot cause the other agent to cooperate. FDT outperforms CDT in Newcomblike dilemmas like these, while also outperforming EDT in other dilemmas, such as the smoking lesion problem and XOR blackmail.
  2. The agent’s predisposition determines whether they will flee to Aleppo or stay in Damascus, and also determines Death’s prediction about their decision. This allows Death to inescapably pursue the agent, making flight pointless; but CDT agents can’t incorporate this fact into their decision-making.
  3. There are some fairly natural ways to cash out Murder Lesion where CDT accepts the problem and FDT forces a contradiction, but we decided not to delve into that interpretation in the paper.

    Tangentially, I’ll note that one of the most common defenses of CDT similarly turns on the idea that certain dilemmas are “unfair” to CDT. Compare, for example, David Lewis’ “Why Ain’cha Rich?”

    It’s obviously possible to define decision problems that are “unfair” in the sense that they just reward or punish agents for having a certain decision theory. We can imagine a dilemma where a predictor simply guesses whether you’re implementing FDT, and gives you $1,000,000 if so. Since we can construct symmetric dilemmas that instead reward CDT agents, EDT agents, etc., these dilemmas aren’t very interesting, and can’t help us choose between theories.

    Dilemmas like Newcomb’s problem and Death in Damascus, however, don’t evaluate agents based on their decision theories. They evaluate agents based on their actions, and the task of the decision theory is to determine which action is best. If it’s unfair to criticize CDT for making the wrong choice in problems like this, then it’s hard to see on what grounds we can criticize any agent for making a wrong choice in any problem, since one can always claim that one is merely at the mercy of one’s decision theory.

  4. Our paper describes the XOR blackmail problem like so:

    An agent has been alerted to a rumor that her house has a terrible termite infestation, which would cost her $1,000,000 in damages. She doesn’t know whether this rumor is true. A greedy and accurate predictor with a strong reputation for honesty has learned whether or not it’s true, and drafts a letter:

    “I know whether or not you have termites, and I have sent you this letter iff exactly one of the following is true: (i) the rumor is false, and you are going to pay me $1,000 upon receiving this letter; or (ii) the rumor is true, and you will not pay me upon receiving this letter.”

    The predictor then predicts what the agent would do upon receiving the letter, and sends the agent the letter iff exactly one of (i) or (ii) is true. Thus, the claim made by the letter is true. Assume the agent receives the letter. Should she pay up?

    In this scenario, EDT pays the blackmailer, while CDT and FDT refuse to pay. See the “Cheating Death in Damascus” paper for more details.

  5. Ben Levinstein notes that this can be compared to backward induction in game theory with common knowledge of rationality. You suppose you’re at some final decision node that (as it turns out) you could only have reached if the players weren’t actually rational to begin with.
  6. FDT agents intervene on their decision function, “FDT(P,G)”. The CDT version replaces this node with “Predisposition” and instead intervenes on “Act”.
  7. Specifically, the CDT-endorsed response here is: “Well, I’ll commit to acting like an FDT agent on future problems, but in one-shot prisoner’s dilemmas that began in my past, I’ll still defect against copies of myself”.

    The problem with this response is that it can cost you arbitrary amounts of utility, provided a clever blackmailer wishes to take advantage. Consider the retrocausal blackmail dilemma in “Toward Idealized Decision Theory”:

    There is a wealthy intelligent system and an honest AI researcher with access to the agent’s original source code. The researcher may deploy a virus that will cause $150 million each in damages to both the AI system and the researcher, and which may only be deactivated if the agent pays the researcher $100 million. The researcher is risk-averse and only deploys the virus upon becoming confident that the agent will pay up. The agent knows the situation and has an opportunity to self-modify after the researcher acquires its original source code but before the researcher decides whether or not to deploy the virus. (The researcher knows this, and has to factor this into their prediction.)

    CDT pays the retrocausal blackmailer, even if it has the opportunity to precommit to do otherwise. FDT (which in any case has no need for precommitment mechanisms) refuses to pay. I cite the intuitive undesirability of this outcome to argue that one should follow FDT in full generality, as opposed to following CDT’s prescription that one should only behave in FDT-like ways in future dilemmas.

    The argument above must be made from a pre-theoretic vantage point, because CDT is internally consistent. There is no argument one could give to a true CDT agent that would cause it to want to use anything other than CDT in decision problems that began in its past.

    If examples like retrocausal blackmail have force (over and above the force of other arguments for FDT), it is because humans aren’t genuine CDT agents. We may come to endorse CDT based on its theoretical and practical virtues, but the case for CDT is defeasible if we discover sufficiently serious flaws in CDT, where “flaws” are evaluated relative to more elementary intuitions about which actions are good or bad. FDT’s advantages over CDT and EDT — properties like its greater theoretical simplicity and generality, and its achievement of greater utility in standard dilemmas — carry intuitive weight from a position of uncertainty about which decision theory is correct.

  8. In principle, it could even turn out that following the prescriptions of the correct decision theory in full generality is humanly impossible. There’s no law of logic saying that the normatively correct decision-making behaviors have to be compatible with arbitrary brain designs (including human brain design). I wouldn’t bet on this, but in such a case learning the correct theory would still have practical import, since we could still build AI systems to follow the normatively correct theory.
  9. A New Year’s resolution that requires me to repeatedly follow through on a promise that I care about in the long run, but would prefer to ignore in the moment, can be modeled as a one-shot twin prisoner’s dilemma. In this case, the dilemma is temporally extended, and my “twins” are my own future selves, who I know reason more or less the same way I do.

    It’s conceivable that I could go off my diet today (“defect”) and have my future selves pick up the slack for me and stick to the diet (“cooperate”), but in practice if I’m the kind of agent who isn’t willing today to sacrifice short-term comfort for long-term well-being, then I presumably won’t be that kind of agent tomorrow either, or the day after.

    Seeing that this is so, and lacking a way to force themselves or their future selves to follow through, CDT agents despair of promise-keeping and abandon their resolutions. FDT agents, seeing the same set of facts, do just the opposite: they resolve to cooperate today, knowing that their future selves will reason symmetrically and do the same.

  10. The paper above shows how to use reflective oracles with CDT as opposed to FDT, because (a) one battle at a time and (b) we don’t yet have a generic algorithm for computing logical counterfactuals, but we do have a generic algorithm for doing CDT-type reasoning.

The post Decisions are for making bad outcomes inconsistent appeared first on Machine Intelligence Research Institute.

Patient moves paralyzed legs with help from electrical stimulation of spinal cord

KurzweilAI - Fri, 04/07/2017 - 16:35

Electrical stimulation of the spinal cord (credit: Mayo Clinic)

Mayo Clinic researchers have used electrical stimulation of the spinal cord and intense physical therapy to help Jared Chinnock intentionally move his paralyzed legs, stand, and make steplike motions for the first time in three years. The chronic traumatic paraplegia case marks the first time a patient has intentionally controlled previously paralyzed functions within the first two weeks of stimulation.

The case was documented April 3, 2017 in an open-access paper in Mayo Clinic Proceedings. The researchers say these results offer further evidence that a combination of this technology and rehabilitation may help patients with spinal cord injuries regain control over previously paralyzed movements, such as steplike actions, balance control, and standing.

“We’re really excited, because our results went beyond our expectations,” says neurosurgeon Kendall Lee, M.D., Ph.D., principal investigator and director of Mayo Clinic’s Neural Engineering Laboratory. “These are initial findings, but the patient is continuing to make progress.”

Chinnock injured his spinal cord at the sixth thoracic vertebrae in the middle of his back three years earlier. He was diagnosed with a “motor complete spinal cord injury,” meaning he could not move or feel anything below the middle of his torso.

Electrical stimulation

The study started with the patient going through 22 weeks of physical therapy. He had three training sessions a week to prepare his muscles for attempting tasks during spinal cord stimulation, and was tested for changes regularly. Some results led researchers to characterize his injury further as “discomplete,” suggesting dormant connections across his injury may remain.

Following physical therapy, he underwent surgery to implant an electrode in the epidural space near the spinal cord below the injured area. The electrode is connected to a computer-controlled device under the skin in the patient’s abdomen that sends electrical current to the spinal cord, enabling the patient to create movement.*

The data suggest that people with discomplete spinal cord injuries may be candidates for epidural stimulation therapy, but more research is needed into how a discomplete injury contributes to recovering function, the researchers note.

After a three-week recovery period from surgery, the patient resumed physical therapy with stimulation settings adjusted to enable movements. In the first two weeks, he was able to intentionally control his muscles while lying on his side (resulting in leg movements), make steplike motions while lying on his side and while standing with partial support, and stand independently, using his arms on support bars for balance. Intentional (volitional) movement means the patient’s brain is sending a signal to motor neurons in his spinal cord to move his legs purposefully. (credit: Mayo Clinic)

* The Mayo Clinic received permission from the FDA for off-label use. The Mayo researchers worked closely with the team of V. Reggie Edgerton, Ph.D., at UCLA on this study, which replicates earlier research done at the University of Louisville. Teams from Mayo Clinic’s departments of Neurosurgery and Physical Medicine and Rehabilitation, and the Division of Engineering collaborated on this project. The research was funded by the Craig H. Neilsen Foundation, Jack Jablonski BEL13VE in Miracles Foundation, Mayo Clinic Center for Clinical and Translational Sciences, Mayo Clinic Rehabilitation Medicine Research Center, Mayo Clinic Transform the Practice, and The Grainger Foundation.


Mayo Clinic | Researchers Strive to Help Paralyzed Man Make Strides – Mayo Clinic


Mayo Clinic | Epidural Stimulation Enables Motor Function After Chronic Paraplegia

Abstract of Enabling Task-Specific Volitional Motor Functions via Spinal Cord Neuromodulation in a Human With Paraplegia

We report a case of chronic traumatic paraplegia in which epidural electrical stimulation (EES) of the lumbosacral spinal cord enabled (1) volitional control of task-specific muscle activity, (2) volitional control of rhythmic muscle activity to produce steplike movements while side-lying, (3) independent standing, and (4) while in a vertical position with body weight partially supported, voluntary control of steplike movements and rhythmic muscle activity. This is the first time that the application of EES enabled all of these tasks in the same patient within the first 2 weeks (8 stimulation sessions total) of EES therapy.

Neural probes for the spinal cord

KurzweilAI - Fri, 04/07/2017 - 04:51

Researchers have developed a rubber-like fiber, shown here, that can flex and stretch while simultaneously delivering both optical impulses for optoelectronic stimulation, and electrical connections for stimulation and monitoring. (credit: Chi (Alice) Lu and Seongjun Park)

A research team led by MIT scientists has developed rubbery fibers for neural probes that can flex and stretch and be implanted into the mouse spinal cord.

The goal is to study spinal cord neurons and ultimately develop treatments to alleviate spinal cord injuries in humans. That requires matching the stretchiness, softness, and flexibility of the spinal cord. In addition, the fibers have to deliver optical impulses (for optoelectronic stimulation of neurons with blue or yellow laser light) and have electrical connections (for electrical stimulation and monitoring of neurons).

Implantable fibers have allowed brain researchers to stimulate specific targets in the brain and monitor electrical responses. But similar studies in the nerves of the spinal cord have been more difficult to carry out. That’s because the spine flexes and stretches as the body moves, and the relatively stiff, brittle fibers used today could damage the delicate spinal cord tissue.

The scientists used a newly developed elastomer (a tough elastic polymer material that can flow and be stretched) that is transparent (like a fiber optic cable) for transmitting optical signals, and formed an external mesh coating of silver nanowires as a conductive layer for electrical signals. Think of it as tough, transparent, silver spaghetti.

Fabrication of flexible neural probes. (A) Thermal (heat) drawing produced a flexible optical fiber that also served as a structural core for the probe. (B) Spool of a fiber with a transparent polycarbonate (PC) core and cyclic olefin copolymer (COC) cladding, which enabled the structure to be drawn into a fiber and was dissolved away after the drawing process. (C) Transmission electron microscopy (TEM) image of silver nanowires (AgNW). (D) Cross-sectional image of the fiber probe with biocompatible polydimethylsiloxane (PDMS) coating. (E) Scanning electron microscopy image showing a portion of the ring silver nanowire electrode cross section. (F) Scanning electron microscopy image of the silver nanowire mesh on top of the fiber surface. (credit: Chi Lu et al./Science Advances)

The fibers are “so floppy, you could use them to do sutures and deliver light at the same time,” says MIT Professor Polina Anikeeva. The fiber can stretch by at least 20 to 30 percent without affecting its properties, she says. “Eventually, we’d like to be able to use something like this to combat spinal cord injury. But first, we have to have biocompatibility and to be able to withstand the stresses in the spinal cord without causing any damage.”

Scientists doing research on spinal cord injuries or disease usually must use larger animals in their studies, because the larger nerve fibers can withstand the more rigid wires used for stimulus and recording. While mice are generally much easier to study and available in many genetically modified strains, there was previously no technology that allowed them to be used for this type of research.

The fibers are not only stretchable but also very flexible. (credit: Chi (Alice) Lu and Seongjun Park)

The team included researchers at the University of Washington and Oxford University. The research was supported by the National Science Foundation, the National Institute of Neurological Disorders and Stroke, the U.S. Army Research Laboratory, and the U.S. Army Research Office through the Institute for Soldier Nanotechnologies at MIT.

Abstract of Flexible and stretchable nanowire-coated fibers for optoelectronic probing of spinal cord circuits

Studies of neural pathways that contribute to loss and recovery of function following paralyzing spinal cord injury require devices for modulating and recording electrophysiological activity in specific neurons. These devices must be sufficiently flexible to match the low elastic modulus of neural tissue and to withstand repeated strains experienced by the spinal cord during normal movement. We report flexible, stretchable probes consisting of thermally drawn polymer fibers coated with micrometer-thick conductive meshes of silver nanowires. These hybrid probes maintain low optical transmission losses in the visible range and impedance suitable for extracellular recording under strains exceeding those occurring in mammalian spinal cords. Evaluation in freely moving mice confirms the ability of these probes to record endogenous electrophysiological activity in the spinal cord. Simultaneous stimulation and recording is demonstrated in transgenic mice expressing channelrhodopsin 2, where optical excitation evokes electromyographic activity and hindlimb movement correlated to local field potentials measured in the spinal cord.

Astronomers detect atmosphere around Earth-like planet

KurzweilAI - Fri, 04/07/2017 - 01:17

Artist’s impression of atmosphere around super-Earth planet GJ 1132b (credit: MPIA)

Astronomers have detected an atmosphere around an Earth-like planet beyond our solar system for the first time: the super-Earth planet GJ 1132b in the Southern constellation Vela, at a distance of 39 light-years from Earth.

The team, led by Keele University’s John Southworth, PhD, used the 2.2 m ESO/MPG telescope in Chile to take images of the planet’s host star, GJ 1132. The astronomers made the detection by measuring the slight decrease in the star’s brightness as the planet transited (passed in front of) it, finding that the planet’s atmosphere absorbed some of the starlight. Previous detections of exoplanet atmospheres all involved gas giants reminiscent of a high-temperature Jupiter.
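For a sense of scale, the transit method comes down to one ratio: the fractional dip in starlight equals the planet’s disk area divided by the star’s. The sketch below plugs in illustrative values for a super-Earth orbiting a small M dwarf; the radii are assumptions for illustration, not figures from the study.

# Transit depth = (planet radius / star radius)^2. The radii below are
# illustrative assumptions for a super-Earth around a small M dwarf,
# not values quoted in the article.
R_SUN_KM = 695_700
R_EARTH_KM = 6_371

r_star = 0.21 * R_SUN_KM     # assumed M-dwarf radius
r_planet = 1.4 * R_EARTH_KM  # assumed super-Earth radius

depth = (r_planet / r_star) ** 2
print(f"Fractional dip in starlight: {depth:.4%}")  # roughly 0.37%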

Possible “water world”

“With this research, we have taken the first tentative step into studying the atmospheres of smaller, Earth-like, planets,” said Southworth. “We simulated a range of possible atmospheres for this planet, finding that those rich in water and/or methane would explain the observations of GJ 1132b. The planet is significantly hotter and a bit larger than Earth, so one possibility is that it is a ‘water world’ with an atmosphere of hot steam.”

Very low-mass stars are extremely common (much more so than Sun-like stars), and are known to host lots of small planets. But they also show a lot of magnetic activity, causing high levels of X-rays and ultraviolet light to be produced, which might completely evaporate the planets’ atmospheres. The properties of GJ 1132b show that an atmosphere can endure for a billion years without being destroyed, the astronomers say.

Given the huge number of very low-mass stars and planets, this could mean the conditions suitable for life are common in the Universe, the astronomers suggest.

The discovery, reported March 31 in the Astronomical Journal, makes GJ 1132b one of the highest-priority targets for further study by current top facilities, such as the Hubble Space Telescope and ESO’s Very Large Telescope, as well as the James Webb Space Telescope, slated for launch in 2018.

The team also included Luigi Mancini of the Max Planck Institute for Astronomy (MPIA), and astronomers at the University of Rome, the University of Cambridge, and Stockholm University.

April 2017 Newsletter

The Singularity Institute Blog - Thu, 04/06/2017 - 18:59

Our newest publication, “Cheating Death in Damascus,” makes the case for functional decision theory, our general framework for thinking about rational choice and counterfactual reasoning.

In other news, our research team is expanding! Sam Eisenstat and Marcello Herreshoff, both previously at Google, join MIRI this month.

Research updates

General updates

News and links

The post April 2017 Newsletter appeared first on Machine Intelligence Research Institute.

This contact lens could someday measure blood glucose and other signs of disease

KurzweilAI - Thu, 04/06/2017 - 04:18

Transparent biosensors in contact lenses (made visible in this artist’s rendition) could soon help track our health. (credit: Jack Forkey/Oregon State University)

Transparent biosensors embedded into contact lenses could soon allow doctors and patients to monitor blood glucose levels and many other telltale signs of disease from teardrops without invasive tests, according to Oregon State University chemical engineering professor Gregory S. Herman, Ph.D., who presented his work Tuesday, April 4, 2017, at the American Chemical Society (ACS) National Meeting & Exposition.

Herman and two colleagues previously invented a compound composed of indium gallium zinc oxide (IGZO). This semiconductor is the same one that has revolutionized electronics, providing higher resolution displays on televisions, smartphones and tablets while saving power and improving touch-screen sensitivity.

In his research, Herman’s goal was to find a way to help people with diabetes continuously monitor their blood glucose levels more efficiently using bio-sensing contact lenses. Continuous glucose monitoring — instead of the prick-and-test approach — helps reduce the risk of diabetes-related health problems. But most continuous glucose monitoring systems require inserting electrodes in various locations under the skin. This can be painful, and the electrodes can cause skin irritation or infections.

Herman says bio-sensing contact lenses could eliminate many of these problems and improve compliance since users can easily replace them on a daily basis. And, unlike electrodes on the skin, they are invisible, which could help users feel less self-conscious about using them.

A schematic illustration of an experimental device (credit: Du X et al./ ACS Applied Materials & Interfaces)

To test this idea, Herman and his colleagues first developed an inexpensive method to make IGZO electronics. Then, they used the approach to fabricate a biosensor containing a transparent sheet of IGZO field-effect transistors and glucose oxidase, an enzyme that breaks down glucose. When they added glucose to the mixture, the enzyme oxidized the blood sugar. As a result, the pH level in the mixture shifted and, in turn, triggered changes in the electrical current flowing through the IGZO transistor.

In conventional biosensors, these electrical changes would be used to measure the glucose concentrations in the interstitial fluid under a patient’s skin. But glucose concentrations are much lower in the eye. So any biosensors embedded into contact lenses will need to be far more sensitive. To address this problem, the researchers created nanostructures within the IGZO biosensor that were able to detect glucose concentrations much lower than those found in tears.*
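As a rough illustration of the transduction chain just described, here is a toy model in Python; every coefficient and the linear pH relationship are assumptions chosen for illustration, not calibration data from Herman’s devices.

# Toy model of the sensing chain: glucose -> pH drop (glucose oxidase
# produces gluconic acid) -> positive shift in turn-on voltage ->
# reduced drain current. All coefficients are illustrative assumptions.

def ph_shift(glucose_mM):
    return -0.05 * glucose_mM  # assumed linear acidification

def turn_on_voltage(glucose_mM, v0=0.5):
    # Lower pH adds acceptor-like surface states, shifting the
    # turn-on voltage positive (per the footnoted paper).
    return v0 - 0.2 * ph_shift(glucose_mM)

def drain_current_uA(glucose_mM, v_gate=1.0, k=40.0):
    # Simple square-law FET above threshold (toy transistor model).
    v_over = max(v_gate - turn_on_voltage(glucose_mM), 0.0)
    return k * v_over ** 2

for c in (0.0, 0.1, 0.5):  # tear glucose is far below blood levels
    print(f"{c:.1f} mM glucose -> {drain_current_uA(c):.2f} uA")

The printed currents fall as glucose rises, mirroring the footnote’s description of decreasing drain-source conductance with increasing glucose.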

In theory, Herman says, more than 2,000 transparent biosensors — each measuring a different bodily function — could be embedded in a 1-millimeter square patch of an IGZO contact lens. Once developed, the biosensors could transmit vital health information to smartphones and other Wi-Fi or Bluetooth-enabled devices.

Herman’s team has already used the IGZO system in catheters to measure uric acid, a key indicator of kidney function, and is exploring the possibility of using it for early detection of cancer and other serious conditions. However, Herman says it could be a year or more before a prototype bio-sensing contact lens is ready for animal testing.

(credit: Google)

The concept appears similar to Google’s smart contact lens project, announced in 2014, which uses a tiny wireless chip and a miniaturized glucose sensor embedded between two layers of soft contact lens material. But Herman says the Google design is more limited and that the research has stalled.

Herman acknowledges funding from the Juvenile Diabetes Research Foundation and the Northwest Nanotechnology Infrastructure, a member of the National Nanotechnology Coordinated Infrastructure, which is supported by the National Science Foundation.

* “We have functionalized the back-channel of IGZO-FETs with aminosilane groups that are cross-linked to glucose oxidase and have demonstrated that these devices have high sensitivity to changes in glucose concentrations. Glucose sensing occurs through the decrease in pH during glucose oxidation, which modulates the positive charge of the aminosilane groups attached to the IGZO surface. The change in charge affects the number of acceptor-like surface states which can deplete electron density in the n-type IGZO semiconductor. Increasing glucose concentrations leads to an increase in acceptor states and a decrease in drain-source conductance due to a positive shift in the turn-on voltage. The functionalized IGZO-FET devices are effective in minimizing detection of interfering compounds including acetaminophen and ascorbic acid.” — Du X, Li Y, Motley JR, Stickle WF, Herman GS, Glucose Sensing Using Functionalized Amorphous In-Ga-Zn-O Field-Effect Transistors. ACS Applied Materials & Interfaces. 2016 Mar 30.

Abstract of Implantable indium gallium zinc oxide field effect biosensors

Amorphous indium gallium zinc oxide (IGZO) field effect transistors (FETs) are a promising technology for a wide range of electronic applications including implantable and wearable biosensors. We have recently developed novel, low-cost methods to fabricate IGZO-FETs, with a wide range of form factors. Attaching self-assembled monolayers (SAM) to the IGZO backchannel allows us to precisely control surface chemistry and improve stability of the sensors. Functionalizing the SAMs with enzymes provides excellent selectivity for the sensors, and effectively minimizes interference from acetaminophen/ascorbic acid. We have recently demonstrated that a nanostructured IGZO network can significantly improve sensitivity as a sensing transducer, compared to blanket IGZO films. In Figure (a) we show a scanning electron microscopy image of a nanostructured IGZO transducer located between two indium tin oxide source/drain electrodes. In Figure (b) we show an atomic force microscope image of the close packed hexagonal IGZO nanostructured network (3×3 mm2), and Figure (c) shows the corresponding height profile along the arrow shown in (b). We will discuss reasons for improved sensitivity for the nanostructured IGZO, and demonstrate high sensitivity for glucose sensing. Finally, fully transparent glucose sensors have been fabricated directly on catheters, and have been characterized by a range of techniques. These results suggest that IGZO-FETs may provide a means to integrate fully transparent, highly-sensitive sensors into contact lenses.

Mass production of low-cost, flexible 3-D printed electronics

KurzweilAI - Thu, 04/06/2017 - 02:46

A test flexible resistive memory 3-D printed on a polyimide foil (credit: Huber et al./Applied Physics Letters)

A group of researchers at Munich University of Applied Sciences in Germany and INRS-EMT in Canada is paving the way for mass-producing low-cost printable electronics by demonstrating a fully inkjet-printable flexible resistive memory.*

Additive manufacturing (commonly used in 3-D printing) allows for a streamlined process flow, replacing the complex lithography used in making chips, at the cost of larger feature sizes, which are usually not critical for memory devices in less computationally demanding applications.

Inkjet printing allows for roll-to-roll processing, making mass-produced printable electronics possible. In an open-access paper appearing this week in Applied Physics Letters (from AIP Publishing), the group presents a proof of concept for inkjet-printed resistive memory (ReRAM).

“We use functional inks to deposit a capacitor structure — conductor-insulator-conductor — with commercially available materials** that have already been deployed in cleanroom processes,” said Bernhard Huber, a doctoral student at INRS-EMT and working in the Laboratory for Microsystems Technology at Munich University of Applied Sciences. “This process is identical to that of an office inkjet printer, with an additional option of fine-tuning the droplet size and heating the target material.”

The process enables extremely low-cost flexible electronics and may lead to print-on-demand electronics, which shows huge potential for small, flexible lines of production and end-user products, the researchers suggest.

Examples include supermarkets printing their own smart tags, public transport providers customizing multifunctional tickets on demand, and wearables.

* Currently, computing devices use two different types of memory: a non-volatile but slow storage memory like Flash and a fast but volatile random access memory (RAM) like DRAM. Resistive RAM combines non-volatile behavior and fast read-and-write access in one device. The two memory states (0 and 1) are defined by the resistance of the memory cell.

** Silver/spin-on-glass (SOG)/poly(3,4-ethylenedioxythiophene) polystyrene sulfonate (PEDOT:PSS) cells were fabricated by inkjet printing alone. The cells feature low switching voltages, low write currents, and a high ratio between high and low resistance state of 10,000.
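A minimal sketch of the read side of such a memory, assuming hypothetical resistance values consistent with the reported ~10,000:1 ratio between states; the threshold and the bit mapping are illustrative conventions, not details from the paper.

# Reading a ReRAM bit: the stored value is encoded in the cell's
# resistance. The resistances are hypothetical, chosen to reflect the
# reported ~10,000x ratio between the two states; the 0/1 mapping and
# threshold are illustrative conventions.
R_LOW = 1e3   # ohms: low-resistance state  -> logical 1 (assumed)
R_HIGH = 1e7  # ohms: high-resistance state -> logical 0 (assumed)

def read_bit(resistance_ohms, threshold=1e5):
    # The threshold sits between the two states (their geometric mean here).
    return 1 if resistance_ohms < threshold else 0

print(read_bit(R_LOW), read_bit(R_HIGH))  # -> 1 0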

Abstract of Fully inkjet printed flexible resistive memory

Resistively switching memory cells (ReRAM) are strong contenders for next-generation non-volatile random access memories. In this paper, we present ReRAM cells on flexible substrates consisting of Ag/spin-on-glass/PEDOT:PSS (poly(3,4-ethylenedioxythiophene) polystyrene sulfonate).

The complete cell is fabricated using a standard inkjet printer without additional process steps. Investigations on the spin-on-glass insulating layer showed that low sintering temperatures are sufficient for good switching behavior, providing compatibility with various foils. The cells feature low switching voltages, low write currents, and a high ratio between high and low resistance state of 10⁴. Combined with excellent switching characteristics under bending conditions, these results pave the way for low-power and low-cost memory devices for future applications in flexible electronics.

Magnetically storing a bit on a single atom — the ultimate future data storage

KurzweilAI - Tue, 04/04/2017 - 03:31

Dysprosium atoms (green) on the surface of nanoparticles can be magnetized in one of two possible directions: “spin up” or “spin down.” (credit: ETH Zurich / Université de Rennes)

Imagine you could store a bit on a single atom or small molecule — the ultimate magnetic data-storage system. An international team of researchers led by chemists from ETH Zurich has taken a step toward that idea by depositing single magnetizable atoms onto a silica surface, with the atoms retaining their magnetism.

In theory, certain atoms can be magnetized in one of two possible directions: “spin up” or “spin down” (representing zero or one); information could then be stored and read based on the sequence of the molecules’ magnetic spin directions. But finding molecules that can store the magnetic information permanently is a challenge, and it’s even more difficult to arrange these molecules on a solid surface to build actual data storage devices.
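The encoding idea itself is simple, as the sketch below illustrates; it shows only the bit-to-spin bookkeeping, not any of the read/write physics.

# Bit-to-spin bookkeeping: "up" encodes 1 and "down" encodes 0
# (an illustrative convention, not the experimental protocol).
def to_spins(byte):
    return ["up" if b == "1" else "down" for b in f"{byte:08b}"]

def from_spins(spins):
    return int("".join("1" if s == "up" else "0" for s in spins), 2)

spins = to_spins(ord("K"))
print(spins)                   # eight spin orientations for one byte
print(chr(from_spins(spins)))  # -> K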

Magnetizing atoms on nanoparticles

Strategy for immobilization of dysprosium atoms (blue, surrounded by molecular scaffold) on a silica nanoparticle surface, based on a grafting step (a) and a thermolytic (chemical decomposition caused by heat) step (b) (credit: Florian Allouche et al./ ACS Central Science)

Nonetheless, Christophe Copéret, a professor at the Laboratory of Inorganic Chemistry at ETH Zurich, and his team have developed a method using a dysprosium atom (dysprosium is a metal belonging to the rare-earth elements). The atom is surrounded by a molecular scaffold that serves as a vehicle. The scientists also developed a method for depositing such molecules on the surface of silica nanoparticles and fusing them by annealing (heating) at 400 degrees Celsius.

The scaffold molecular structure disintegrates in the process, yielding nanoparticles with dysprosium atoms well-dispersed at the surface. The scientists showed that these atoms can then be magnetized and that they maintain their magnetic information.

One advantage of their new method is its simplicity: nanoparticles bonded with dysprosium can be made in any chemical laboratory, with no cleanroom or complex equipment required. And the magnetizable nanoparticles can be stored at room temperature and reused.

Their magnetization process currently only works at around minus 270 degrees Celsius (near absolute zero), and the magnetization can only be maintained for up to one and a half minutes. So the scientists are now looking for methods that will allow the magnetization to be stabilized at higher temperatures and for longer periods of time. They are also looking for ways to fuse atoms to a flat surface instead of to spherical nanoparticles.

Other preparation methods also involve direct deposition of individual atoms onto a surface, but the materials are only stable at very low temperatures, mainly due to the agglomeration of these individual atoms. Alternatively, molecules with ideal magnetic properties can be deposited onto a surface, but this immobilization often negatively affects the structure and the magnetic properties of the final object.

Scientists from the Universities of Lyon and Rennes, Collège de France in Paris, Paul Scherrer Institute in Switzerland, and Berkeley National Laboratory were involved in the research.

Abstract of Magnetic Memory from Site Isolated Dy(III) on Silica Materials

Achieving magnetic remanence at single isolated metal sites dispersed at the surface of a solid matrix has been envisioned as a key step toward information storage and processing in the smallest unit of matter. Here, we show that isolated Dy(III) sites distributed at the surface of silica nanoparticles, prepared with a simple and scalable two-step process, show magnetic remanence and display a hysteresis loop open at liquid 4He temperature, in contrast to the molecular precursor which does not display any magnetic memory. This singular behavior is achieved through the controlled grafting of a tailored Dy(III) siloxide complex on partially dehydroxylated silica nanoparticles followed by thermal annealing. This approach allows control of the density and the structure of isolated, “bare” Dy(III) sites bound to the silica surface. During the process, all organic fragments are removed, leaving the surface as the sole ligand, promoting magnetic remanence.

The next agricultural revolution: a ‘bionic leaf’ that could help feed the world

KurzweilAI - Tue, 04/04/2017 - 01:28

The radishes on the right were grown with the help of a bionic leaf that produces fertilizer with bacteria, sunlight, water, and air. (credit: Nocera lab, Harvard University)

Harvard University chemists have invented a new kind of “bionic” leaf that uses bacteria, sunlight, water, and air to make fertilizer right in the soil where crops are grown. It could make possible a future low-cost commercial fertilizer for poorer countries in the emerging world.

The invention deals with the renewed challenge of feeding the world as the population continues to balloon.* “When you have a large centralized process and a massive infrastructure, you can easily make and deliver fertilizer,” Daniel Nocera, Ph.D., says. “But if I said that now you’ve got to do it in a village in India onsite with dirty water — forget it. Poorer countries in the emerging world don’t always have the resources to do this. We should be thinking of a distributed system because that’s where it’s really needed.”

The research was presented at the national meeting of the American Chemical Society (ACS) today (April 3, 2017). The new bionic leaf builds on a previous Nocera-team invention: the “artificial leaf,” a device made from inexpensive materials that mimics photosynthesis. When exposed to sunlight, it splits water into hydrogen and oxygen; the two gases can then be stored and used in a fuel cell to produce electricity.

That was followed by “bionic leaf 2.0,” a water-splitting system that pulls carbon dioxide out of the air and uses solar energy plus hydrogen-eating Ralstonia eutropha bacteria to produce liquid fuel with 10 percent efficiency, compared to the 1 percent seen in the fastest-growing plants. It provided biomass and liquid-fuel yields that greatly exceeded those from natural photosynthesis.

Fertilizer created from sunlight + water + carbon dioxide and nitrogen from the air

For the new “bionic leaf,” Nocera’s team has designed a system in which bacteria use hydrogen from the water split by the artificial leaf plus carbon dioxide from the atmosphere to make a bioplastic that the bacteria store inside themselves as fuel. “I can then put the bug [bacteria] in the soil because it has already used the sunlight to make the bioplastic,” Nocera says. “Then the bug pulls nitrogen from the air and uses the bioplastic, which is basically stored hydrogen, to drive the fixation cycle to make ammonia for fertilizing crops.”

The researchers have used their approach to grow five crop cycles of radishes. The vegetables receiving the bionic-leaf-derived fertilizer weigh 150 percent more than the control crops. The next step, Nocera says, is to boost throughput so that one day, farmers in India or sub-Saharan Africa can produce their own fertilizer with this method.

Nocera said a paper describing the new system will be submitted for publication in about six weeks.

* The first “green revolution” in the 1960s saw the increased use of fertilizer on new varieties of rice and wheat, which helped double agricultural production. Although the transformation resulted in some serious environmental damage, it potentially saved millions of lives, particularly in Asia, according to the United Nations (U.N.) Food and Agriculture Organization. But the world’s population continues to grow and is expected to swell by more than 2 billion people by 2050, with much of this growth occurring in some of the poorest countries, according to the U.N. Providing food for everyone will require a multi-pronged approach, but experts generally agree that one of the tactics will have to involve boosting crop yields to avoid clearing even more land for farming.


American Chemical Society | A ‘bionic leaf’ could help feed the world

Two new researchers join MIRI

The Singularity Institute Blog - Sat, 04/01/2017 - 03:46

MIRI’s research team is growing! I’m happy to announce that we’ve hired two new research fellows to contribute to our work on AI alignment: Sam Eisenstat and Marcello Herreshoff, both from Google.

 

Sam Eisenstat studied pure mathematics at the University of Waterloo, where he carried out research in mathematical logic. His previous work was on the automatic construction of deep learning models at Google.

Sam’s research focus is on questions relating to the foundations of reasoning and agency, and he is especially interested in exploring analogies between current theories of logical uncertainty and Bayesian reasoning. He has also done work on decision theory and counterfactuals. His past work with MIRI includes “Asymptotic Decision Theory,” “A Limit-Computable, Self-Reflective Distribution,” and “A Counterexample to an Informal Conjecture on Proof Length and Logical Counterfactuals.”

 

Marcello Herreshoff studied at Stanford, receiving a B.S. in Mathematics with Honors and getting two honorable mentions in the Putnam Competition, the world’s most highly regarded university-level math competition. Marcello then spent five years as a software engineer at Google, gaining a background in machine learning.

Marcello is one of MIRI’s earliest research collaborators, and attended our very first research workshop alongside Eliezer Yudkowsky, Paul Christiano, and Mihály Bárász. Marcello has worked with us in the past to help produce results such as “Program Equilibrium in the Prisoner’s Dilemma via Löb’s Theorem,” “Definability of Truth in Probabilistic Logic,” and “Tiling Agents for Self-Modifying AI.” His research interests include logical uncertainty and the design of reflective agents.

 

Sam and Marcello will be starting with us in the first two weeks of April. This marks the beginning of our first wave of new research fellowships since 2015, though we more recently added Ryan Carey to the team on an assistant research fellowship (in mid-2016).

We have additional plans to expand our research team in the coming months, and will soon be hiring for a more diverse set of technical roles at MIRI — details forthcoming!

The post Two new researchers join MIRI appeared first on Machine Intelligence Research Institute.

This advance could finally make graphene-based semiconductor chips feasible

KurzweilAI - Sat, 04/01/2017 - 01:01

Atomic force microscopy images of as-deposited (left) and laser-annealed (right) reduced graphene oxide (rGO) thin films. The entire “pulsed laser annealing” process is done at room temperature and atmospheric pressure, using high-power laser pulses to convert p-type rGO material into n-type, and is completed in about one fifth of a microsecond. (credit: Anagh Bhaumik and Jagdish Narayan/Journal of Applied Physics)

Researchers at North Carolina State University (NC State) have developed a layered material that can be used to develop transistors based on graphene — a long-sought goal in the electronics industry.

Graphene has attractive properties, such as extremely high conductivity (it conducts electrical current very well, compared to copper, for example), but it’s not a semiconductor, so it can’t work as the active element in a transistor (aside from providing great connections).

However, a form of graphene oxide called “reduced graphene oxide” (rGO) does conduct well.* Despite that, rGO still can’t function in a transistor. That’s because the design of a transistor is based on creating a junction between two materials, one positively charged (p-type) and one negatively charged (n-type), and native rGO is only p-type.

The NC State researchers’ solution was to use high-powered laser pulses to disrupt chemical groups on an rGO thin film. This disruption moved electrons from one group to another, effectively converting p-type rGO to n-type rGO. They then used the two forms of rGO as two layers (a layer of n-type rGO on the surface and a layer of p-type rGO underneath) — creating a layered thin-film material that could be used to develop rGO-based transistors for use in future semiconductor chips.
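Stacking the two polarities matters because a p-n junction is the basic rectifying element from which diodes and transistors are built. As a reminder of the behavior such a junction provides, the sketch below evaluates the textbook Shockley diode equation; the saturation current and ideality factor are generic assumptions, not measured values for the NC State rGO devices.

import math

# Ideal-diode (Shockley) current for a p-n junction like the one formed
# by n-type rGO on p-type rGO. I_S and N are generic textbook
# assumptions, not measurements of the NC State devices.
I_S = 1e-12    # saturation current, A (assumed)
N = 1.5        # ideality factor (assumed)
V_T = 0.02585  # thermal voltage at room temperature, V

def diode_current(v):
    return I_S * (math.exp(v / (N * V_T)) - 1.0)

for v in (-0.5, 0.0, 0.3, 0.6):
    print(f"{v:+.1f} V -> {diode_current(v):.3e} A")  # rectifying behavior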

The researchers were also able to integrate the rGO-based transistors onto sapphire and silicon wafers across the entire wafer.

The paper was published in the Journal of Applied Physics. The work was done with support from the National Science Foundation.

* Reduction is a chemical reaction that involves the gaining of electrons.

Abstract of Conversion of p to n-type reduced graphene oxide by laser annealing at room temperature and pressure

Physical properties of reduced graphene oxide (rGO) are strongly dependent on the ratio of sp2 to sp3 hybridized carbon atoms and the presence of different functional groups in its structural framework. This research, for the very first time, illustrates successful wafer scale integration of graphene-related materials by a pulsed laser deposition technique, and controlled conversion of p to n-type 2D rGO by pulsed laser annealing using a nanosecond ArF excimer laser. Reduced graphene oxide is grown onto c-sapphire by employing pulsed laser deposition in a laser MBE chamber and is intrinsically p-type in nature. Subsequent laser annealing converts p into n-type rGO. The XRD, SEM, and Raman spectroscopy indicate the presence of large-area rGO onto c-sapphire having Raman-active vibrational modes: D, G, and 2D. High-resolution SEM and AFM reveal the morphology due to interfacial instability and formation of n-type rGO. Temperature-dependent resistance data of rGO thin films follow the Efros-Shklovskii variable-range-hopping model in the low-temperature region and Arrhenius conduction in the high-temperature regime. The photoluminescence spectra also reveal less intense and broader blue fluorescence spectra, indicating the presence of miniature-sized sp2 domains in the vicinity of π* electronic states, which favor the VRH transport phenomena. The XPS results reveal a reduction of the rGO network after laser annealing, with the C/O ratio measuring as high as 23% after laser-assisted reduction. The p to n-type conversion is due to the reduction of the rGO framework, which also decreases the ratio of the intensity of the D peak to that of the G peak, as is evident from the Raman spectra. This wafer scale integration of rGO with c-sapphire and p to n-type conversion employing a laser annealing technique at room temperature and pressure will be useful for large-area electronic devices and will open a new frontier for further extensive research in graphene-based functionalized 2D materials.

Scientists grow beating heart tissue on spinach leaves

KurzweilAI - Fri, 03/31/2017 - 09:46

(credit: Worcester Polytechnic Institute)

A research team headed by Worcester Polytechnic Institute (WPI) scientists* has solved a major tissue engineering problem holding back the regeneration of damaged human tissues and organs: how to grow small, delicate blood vessels, which are beyond the capabilities of 3D printing.**

The researchers used plant leaves as scaffolds (structures) in an attempt to create the branching network of blood vessels — down to the capillary scale — required to deliver the oxygen, nutrients, and essential molecules required for proper tissue growth.

In a series of unconventional experiments, the team cultured beating human heart cells on spinach leaves that were stripped of plant cells.*** The researchers first decellularized spinach leaves (removed cells, leaving only the veins) by perfusing (flowing) a detergent solution through the leaves’ veins. What remained was a framework made up primarily of biocompatible cellulose, which is already used in a wide variety of regenerative medicine applications, such as cartilage tissue engineering, bone tissue engineering, and wound healing.

A spinach leaf (left) was decellularized in 7 days, leaving only the scaffold (right), which served as an intact vascular network. As a test, red dye was pumped through its veins, simulating blood, oxygen, and nutrients. Cardiomyocytes (cardiac muscle cells) derived from human pluripotent stem cells were then seeded onto the surface of the leaf scaffold, forming cell clusters that demonstrated cardiac contractile function and calcium-handling capabilities for 21 days. (credit: Worcester Polytechnic Institute)

After testing the spinach vascular (leaf vessel structure) system mechanically by flowing fluids and microbeads similar in size to human blood cells through it, the researchers seeded the vasculature with human umbilical vein endothelial cells (HUVECs) to grow endothelial cells (which line blood vessels).

Human mesenchymal stem cells (hMSC) and human pluripotent stem-cell-derived cardiomyocytes (cardiac muscle cells) (hPS-CM) were then seeded onto the outer surfaces of the plant scaffolds. The cardiomyocytes spontaneously demonstrated cardiac contractile function (beating) and calcium-handling capabilities over the course of 21 days.

The decellularize-recellularize process (credit: Joshua R. Gershlak et al./Biomaterials)

The future of “crossing kingdoms”

These proof-of-concept studies may open the door to using multiple spinach leaves to grow layers of healthy heart muscle, and a potential tissue engineered graft based upon the plant scaffolds could use multiple leaves, where some act as arterial support and some act as venous return of blood and fluids from human tissue, say the researchers.

“Our goal is always to develop new therapies that can treat myocardial infarction, or heart attacks,” said Glenn Gaudette, PhD, professor of biomedical engineering at WPI and corresponding author of an open-access paper in the journal Biomaterials, published online in advance of the May 2017 issue.

“Unfortunately, we are not doing a very good job of treating them today. We need to improve that. We have a lot more work to do, but so far this is very promising.”

Currently, it’s not clear how the plant vasculature would be integrated into the native human vasculature and whether there would be an immune response, the authors advise.

The researchers are also now optimizing the decellularization process and seeing how well various human cell types grow while they are attached to (and potentially nourished by) various plant-based scaffolds that could be adapted for specialized tissue regeneration studies. “The cylindrical hollow structure of the stem of Impatiens capensis might better suit an arterial graft,” the authors note. “Conversely, the vascular columns of wood might be useful in bone engineering due to their relative strength and geometries.”

Other types of plants could also provide the framework for a wide range of other tissue engineering technologies, the authors suggest.****

The authors conclude that “development of decellularized plants for scaffolding opens up the potential for a new branch of science that investigates the mimicry between kingdoms, e.g., between plant and animal. Although further investigation is needed to understand future applications of this new technology, we believe it has the potential to develop into a ‘green’ solution pertinent to a myriad of regenerative medicine applications.”

* The research team also includes human stem cell and plant biology researchers at the University of Wisconsin-Madison, and Arkansas State University-Jonesboro.

** The research is driven by the pressing need for transplantable organs and tissues, which far exceeds their availability. More than 100,000 patients are on the donor waiting list at any given time, and an average of 22 people die each day while waiting for a donor organ or tissue to become available, according to a 2016 paper in the American Journal of Transplantation.

*** In addition to spinach leaves, the team successfully removed cells from parsley, Artemesia annua (sweet wormwood), and peanut hairy roots.

**** “Tissue engineered scaffolds are typically produced either from animal-derived or synthetic biomaterials, both of which have a large cost and large environmental impact. Animal-derived biomaterials used extensively as scaffold materials for tissue engineering include native [extracellular matrix]  proteins such as collagen I or fibronectin and whole animal tissues and organs. Annually, 115 million animals are estimated to be used in research. Due to this large number, a lot of energy is necessary for the upkeep and feeding of such animals as well as to dispose of the large amount of waste that is generated. Along with this environmental impact, animal research also has a plethora of ethical considerations, which could be alleviated by forgoing animal models in favor of more biologically relevant in vitro human tissue models,” the authors advise.

Worcester Polytechnic Institute | Spinach leaves can carry blood to grow human tissues

 

Global night-time lights provide unfiltered data on human activities and socio-economic factors

KurzweilAI - Thu, 03/30/2017 - 03:48

Night-time lights seen from space correlate to everything from electricity consumption and CO2 emissions, to gross domestic product, population and poverty. (credit: NASA)

Researchers from the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) and the Environmental Defense Fund (EDF) have developed an online tool that incorporates 21 years of night-time lights data to understand and compare changes in human activities in countries around the world.

The research is published in PLOS One.

The tool compares the brightness of a country’s night-time lights with the corresponding electricity consumption, GDP, population, poverty, and emissions of CO2, CH4, N2O, and F-gases since 1992, without relying on national statistics, which often differ in methodology and in the motivations of those collecting them.

Consistent with previous research, the team found the highest correlations between night-time lights and GDP, electricity consumption, and CO2 emissions. Correlations with population, N2O emissions, and CH4 emissions were slightly less pronounced, and, as expected, there was an inverse correlation between the brightness of lights and poverty.

“This is the most comprehensive tool to date to look at the relationship between night-time lights and a series of socio-economic indicators,” said Gernot Wagner, a research associate at SEAS and coauthor of the paper.

The data source is the Defense Meteorological Satellite Program (DMSP) dataset, which provides 21 years’ worth of night-time data. The researchers also use Google Earth Engine (GEE), a platform recently made available to researchers that allows them to explore global aggregate relationships at national scales between DMSP night-lights data and a series of economic and environmental variables.
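A minimal sketch of the kind of country-level comparison the tool supports, assuming a hypothetical pre-aggregated country-year table; the file name and column names are placeholders, not the actual DMSP/GEE pipeline.

import pandas as pd

# Hypothetical country-year panel; the file and column names are
# placeholders standing in for the DMSP-derived "area lit" series and
# the indicators discussed in the paper.
df = pd.read_csv("night_lights_panel.csv")
indicators = ["electricity_consumption", "gdp", "co2_emissions",
              "population", "poverty_rate"]

# Pearson correlation of each indicator with area lit, echoing the
# paper's global aggregate comparisons (poverty should come out negative).
correlations = df[indicators].corrwith(df["area_lit"])
print(correlations.sort_values(ascending=False))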

Abstract of Night-time lights: A global, long term look at links to socio-economic trends

We use a parallelized spatial analytics platform to process the twenty-one year totality of the longest-running time series of night-time lights data—the Defense Meteorological Satellite Program (DMSP) dataset—surpassing the narrower scope of prior studies to assess changes in area lit of countries globally. Doing so allows a retrospective look at the global, long-term relationships between night-time lights and a series of socio-economic indicators. We find the strongest correlations with electricity consumption, CO2 emissions, and GDP, followed by population, CH4 emissions, N2O emissions, poverty (inverse) and F-gas emissions. Relating area lit to electricity consumption shows that while a basic linear model provides a good statistical fit, regional and temporal trends are found to have a significant impact.

Graphene-based neural probe detects brain activity at high resolution and signal quality

KurzweilAI - Wed, 03/29/2017 - 09:25

16 flexible graphene transistors (inset) integrated into a flexible neural probe enable electrical signals from neurons to be measured at high resolution and signal quality. (credit: ICN2)

Researchers from the European Graphene Flagship* have developed a new microelectrode array neural probe based on graphene field-effect transistors (FETs) for recording brain activity at high resolution while maintaining excellent signal-to-noise ratio (quality).

The new neural probe could lay the foundation for a future generation of in vivo neural recording implants, for patients with epilepsy, for example, and for disorders that affect brain function and motor control, the researchers suggest. It could possibly play a role in Elon Musk’s just-announced Neuralink “neural lace” research project.

Measuring neural activity with high precision

(Left) Representation of the graphene implant placed on the surface of the rat’s brain. (Right) Microscope image of a multielectrode array with conventional platinum electrodes (a) vs. the miniature graphene device next to it (b). Scale bar is 1.25 mm. (credit: Benno M. Blaschke et al./2D Mater.)

Neural activity is measured by detecting the electric fields generated when neurons fire. These fields are highly localized, so ultra-small measuring devices that can be densely packed are required for accurate brain readings.

The new device has a microelectrode array of 16 graphene-based transistors arranged on a flexible substrate that can conform to the brain’s surface. Graphene provides biocompatibility, chemical stability, flexibility, and excellent electrical properties, which make it attractive for use in medical devices, especially for recording brain activity, the researchers suggest.**

(For a state-of-the-art example of microelectrode array use in the brain, see “Brain-computer interface advance allows paralyzed people to type almost as fast as some smartphone users.”)

Schematic of the head of a graphene implant showing a graphene transistor array and feed lines. (Inset): cross section of a graphene transistor with graphene between the source and drain contacts, which are covered by an insulating polyimide photoresist. (credit: Benno M. Blaschke et al./2D Mater.)

In an experiment with rats, the researchers used the new devices to record brain activity during sleep and in response to visual light stimulation.

The graphene transistor probes showed good spatial discrimination (identifying specific locations) of the brain activity and outperformed state-of-the-art platinum electrode arrays, with higher signal amplification and a better signal-to-noise performance when scaled down to very small sizes.

That means the graphene transistor probes can be more densely packed and at higher resolution, features that are vital for precision mapping of brain activity. And since the probes have transistor amplifiers built in, they remove the need for the separate pre-amplification required with metal electrodes.
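A deliberately crude scaling sketch of that trade-off, using the footnoted relation that a passive electrode’s noise is inversely proportional to its size; every constant here is an assumption for illustration, not a measurement from the study.

# Toy scaling model: shrinking a passive electrode raises its noise
# (noise ~ 1/area, per the footnote), eroding SNR, while a transistor
# with built-in gain is limited instead by fixed downstream noise.
SIGNAL_UV = 100.0  # local field potential amplitude, microvolts (assumed)

def electrode_snr(area_um2, k_noise=2e4):
    noise_uv = k_noise / area_um2  # noise rises as the site shrinks
    return SIGNAL_UV / noise_uv

def transistor_snr(gain=10.0, downstream_noise_uv=20.0):
    # On-site amplification means fixed downstream noise dominates,
    # independent of site area in this toy model.
    return gain * SIGNAL_UV / downstream_noise_uv

for area in (10_000, 1_000, 100):  # site area, square micrometers
    print(f"{area:>6} um^2: electrode SNR {electrode_snr(area):6.1f}, "
          f"transistor SNR {transistor_snr():.1f}")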

Neural probes are placed directly on the surface of the brain, so safety is important. The researchers determined that the flexible graphene-based probes are non-toxic, did not induce any significant inflammation, and are long-lasting.

“Graphene neural interfaces have shown already a great potential, but we have to improve on the yield and homogeneity of the device production in order to advance towards a real technology,” said Jose Antonio Garrido, who led the research at the Catalan Institute of Nanoscience and Nanotechnology in Spain.

“Once we have demonstrated the proof of concept in animal studies, the next goal will be to work towards the first human clinical trial with graphene devices during intraoperative mapping of the brain. This means addressing all regulatory issues associated to medical devices such as safety, biocompatibility, etc.”

The research was published in the journal 2D Materials.

* With a budget of €1 billion, the Graphene Flagship consortium consists of more than 150 academic and industrial research groups in 23 countries. Launched in 2013, the Flagship aims to take graphene from the realm of academic laboratories into European society within 10 years. The research was a collaborative effort involving Flagship partners Technical University of Munich (TU Munich, Germany), Institut d’Investigacions Biomèdiques August Pi i Sunyer (IDIBAPS, Spain), Spanish National Research Council (CSIC, Spain), The Biomedical Research Networking Center in Bioengineering, Biomaterials and Nanomedicine (CIBER-BBN, Spain), and the Catalan Institute of Nanoscience and Nanotechnology (ICN2, Spain).

** “Using multielectrode arrays for high-density recordings presents important drawbacks. Since the electrode impedance and noise are inversely proportional to the electrode size, a trade-off between spatial resolution and signal-to-noise ratio has to be made. Further, the very small voltages of the recorded signals are highly susceptible to noise in the standard electrode configuration. [That requires preamplification, which means] the fabrication complexity is significantly increased and the additional electrical components required for the voltage-to-current conversion limit the integration density. … Metal-oxide-semiconductor field-effect transistors (MOSFETs) where the gate metal is replaced with an electrolyte and an electrode, referred to as “solution-gated field-effect transistors (SGFETs) or electrolyte-gated field-effect transistors, can be exposed directly to neurons and be used to record action potentials with high fidelity. … Although the potential of graphene-based SGFET technology has been suggested in in vitro studies, so far no in vivo confirmation has been demonstrated. Here we present the fabrication of flexible arrays of graphene SGFETs and demonstrate in vivo mapping of spontaneous slow waves, as well as visually evoked and pre-epileptic activity in the rat.” — Benno M. Blaschke et al./2D Mater.

Abstract of Mapping brain activity with flexible graphene micro-transistors

Establishing a reliable communication interface between the brain and electronic devices is of paramount importance for exploiting the full potential of neural prostheses. Current microelectrode technologies for recording electrical activity, however, evidence important shortcomings, e.g. challenging high density integration. Solution-gated field-effect transistors (SGFETs), on the other hand, could overcome these shortcomings if a suitable transistor material were available. Graphene is particularly attractive due to its biocompatibility, chemical stability, flexibility, low intrinsic electronic noise and high charge carrier mobilities. Here, we report on the use of an array of flexible graphene SGFETs for recording spontaneous slow waves, as well as visually evoked and also pre-epileptic activity in vivo in rats. The flexible array of graphene SGFETs allows mapping brain electrical activity with excellent signal-to-noise ratio (SNR), suggesting that this technology could lay the foundation for a future generation of in vivo recording implants.

2016 in review

The Singularity Institute Blog - Wed, 03/29/2017 - 02:27

It’s time again for my annual review of MIRI’s activities.1 In this post I’ll provide a summary of what we did in 2016, see how our activities compare to our previously stated goals and predictions, and reflect on how our strategy this past year fits into our mission as an organization. We’ll be following this post up in April with a strategic update for 2017.

After doubling the size of the research team in 2015,2 we slowed our growth in 2016 and focused on integrating the new additions into our team, making research progress, and writing up a backlog of existing results.

2016 was a big year for us on the research front, with our new researchers making some of the most notable contributions. Our biggest news was Scott Garrabrant’s logical inductors framework, which represents, by a significant margin, our largest progress to date on the problem of logical uncertainty. We additionally released “Alignment for Advanced Machine Learning Systems” (AAMLS), a new technical agenda spearheaded by Jessica Taylor.

We also spent this last year engaging more heavily with the wider AI community, e.g., through the month-long Colloquium Series on Robust and Beneficial Artificial Intelligence we co-ran with the Future of Humanity Institute, and through talks and participation in panels at many events through the year.

 

2016 Research Progress

We saw significant progress this year in our agent foundations agenda, including Scott Garrabrant’s logical inductor formalism (which represents possibly our most significant technical result to date) and related developments in Vingean reflection. At the same time, we saw relatively little progress in error tolerance and value specification, which we had planned to put more focus on in 2016. Below, I’ll note the highlights from each of our research areas:

Logical Uncertainty and Naturalized Induction
  • 2015 progress: sizable. (Predicted: modest.)
  • 2016 progress: sizable. (Predicted: sizable.)

We saw a large body of results related to logical induction. Logical induction developed out of earlier work led by Scott Garrabrant in late 2015 (written up in April 2016) that served to divide the problem of logical uncertainty into two subproblems. Scott demonstrated that both subproblems could be solved at once using an algorithm that satisfies a highly general “logical induction criterion.”

This criterion provides a simple way of understanding idealized reasoning under resource limitations. In Andrew Critch’s words, logical induction is “a financial solution to the computer science problem of metamathematics”: a procedure that assigns reasonable probabilities to arbitrary (empirical, logical, mathematical, self-referential, etc.) sentences in a way that outpaces deduction, explained by analogy to inexploitable stock markets.

Our other main 2016 work in this domain is an independent line of research spearheaded by MIRI research associate Vadim Kosoy, “Optimal Polynomial-Time Estimators: A Bayesian Notion of Approximation Algorithm.” Vadim approaches the problem of logical uncertainty from a more complexity-theoretic angle of attack than logical induction does, providing a formalism for defining optimal feasible approximations of computationally infeasible objects that retain a number of relevant properties of those objects.

Decision Theory
  • 2015 progress: modest. (Predicted: modest.)
  • 2016 progress: modest. (Predicted: modest.)

We continue to see a steady stream of interesting results related to the problem of defining logical counterfactuals. In 2016, we began applying the logical inductor framework to decision-theoretic problems, working with the idea of universal inductors. Andrew Critch also developed a game-theoretic method for resolving policy disagreements that outperforms standard compromise approaches and also allows for negotiators to disagree on factual questions.

We have a backlog of many results to write up in this space. Our newest, “Cheating Death in Damascus,” summarizes the case for functional decision theory, a theory that systematically outperforms the conventional academic views (causal and evidential decision theory) in decision theory and game theory. This is the basic framework we use for studying logical counterfactuals and related open problems, and is a good introductory paper for understanding our other work in this space.

For an overview of our more recent work on this topic, see Tsvi Benson-Tilsen’s decision theory index on the research forum.

Vingean Reflection
  • 2015 progress: modest. (Predicted: modest.)
  • 2016 progress: modest-to-strong. (Predicted: limited.)

Our main results in reflective reasoning last year concerned self-trust in logical inductors. After seeing no major advances in Vingean reflection for many years—the last big step forward was perhaps Benya Fallenstein’s model polymorphism proposal in late 2012—we had planned to de-prioritize work on this problem in 2016, on the assumption that other tools were needed before we could make much more headway. However, in 2016 logical induction turned out to be surprisingly useful for solving a number of outstanding tiling problems.

As described in “Logical Induction,” logical inductors provide a simple demonstration of self-referential reasoning that is highly general and accurate, is free of paradox, and assigns reasonable credence to the reasoner’s own beliefs. This provides some evidence that the problem of logical uncertainty itself is relatively central to a number of puzzles concerning the theoretical foundations of intelligence.

Error Tolerance
  • 2015 progress: limited. (Predicted: modest.)
  • 2016 progress: limited. (Predicted: modest.)

2016 saw the release of our “Alignment for Advanced ML Systems” research agenda, with a focus on error tolerance and value specification. Less progress occurred in these areas than expected, partly because investigations here are still very preliminary. We also spent less time on research in mid-to-late 2016 overall than we had planned, in part because we spent a lot of time writing up our new results and research proposals.

Nate noted in our October AMA that he considers this time investment in drafting write-ups one of our main 2016 errors, and we plan to spend less time on paper-writing in 2017.

Our 2016 work on error tolerance included “Two Problems with Causal-Counterfactual Utility Indifference” and some time we spent discussing and critiquing Dylan Hadfield-Menell’s proposal of corrigibility via CIRL. We plan to share our thoughts on the latter line of research more widely later this year.

Value Specification
  • 2015 progress: limited. (Predicted: limited.)
  • 2016 progress: weak-to-modest. (Predicted: modest.)

Although we planned to put more focus on value specification last year, we ended up making less progress than expected. Examples of our work in this area include Jessica Taylor and Ryan Carey’s posts on online learning, and Jessica’s analysis of how errors might propagate within a system of humans consulting one another.

 

We’re extremely pleased with our progress on the agent foundations agenda over the last year, and we’re hoping to see more progress cascading from the new set of tools we’ve developed. At the same time, it remains to be seen how tractable the new problems we’re tackling in the AAMLS agenda are.

 

2016 Research Support Activities

In September, we brought on Ryan Carey to support Jessica’s work on the AAMLS agenda as an assistant research fellow.3 Our assistant research fellowship program seems to be working out well; Ryan has been a lot of help to us in working with Jessica to write up results (e.g., “Bias-Detecting Online Learners”), along with setting up TensorFlow tools for a project with Patrick LaVictoire.

We’ll likely be expanding the program this year and bringing on additional assistant research fellows, in addition to a slate of new research fellows.

Focusing on other activities that relate relatively directly to our technical research program, including collaborating and syncing up with researchers in industry and academia, in 2016 we:

On the whole, our research team growth in 2016 was somewhat slower than expected. We’re still accepting applicants for our type theorist position (and for other research roles at MIRI, via our Get Involved page), but we expect to leave that role unfilled for at least the next 6 months while we focus on onboarding additional core researchers.4

 

2016 General Activities

Also in 2016, we:

 

2016 Fundraising

2016 was a strong year in MIRI’s fundraising efforts. We raised a total of $2,285,200, a 44% increase on the $1,584,109 raised in 2015. This increase was largely driven by:

  • A general grant of $500,000 from the Open Philanthropy Project.5
  • A donation of $300,000 from Blake Borgeson.
  • Contributions of $93,548 from Raising for Effective Giving.6
  • A research grant of $83,309 from the Future of Life Institute.7
  • Our community’s strong turnout during our Fall Fundraiser—at $595,947, our second-largest fundraiser to date.
  • A gratifying show of support from supporters at the end of the year, despite our not running a Winter Fundraiser.

Assuming we can sustain this funding level going forward, this represents a preliminary fulfillment of our primary fundraising goal from January 2016:

Our next big push will be to close the gap between our new budget and our annual revenue. In order to sustain our current growth plans — which are aimed at expanding to a team of approximately ten full-time researchers — we’ll need to begin consistently taking in close to $2M per year by mid-2017.

As the graph below indicates, 2016 continued a positive trend of growth in our fundraising efforts.

Drawing conclusions from these year-by-year comparisons can be a little tricky. MIRI underwent significant organizational changes over this time span, particularly in 2013, and our switch to accrual-based accounting in 2014 further complicates comparisons with earlier years.

However, it is possible to highlight certain aspects of our progress in 2016:

  • The Fall Fundraiser: For the first time, we held a single fundraiser in 2016, running from mid-September to October 31, instead of our “traditional” summer and winter fundraisers. While we didn’t hit our initial target of $750k, we suspected that many of our funders were simply waiting to give later in the year and would make up the shortfall. They came through in large numbers at the end of 2016, some possibly motivated by public posts by members of the community.8 All told, we received more contributions in December 2016 (~$430,000) than in the same month of either of the previous two years, when we actively ran Winter Fundraisers, which is an interesting data point for us. The following charts throw additional light on our supporters’ response to the fall fundraiser:


    Note that if we remove the Open Philanthropy Project’s grant from the Pre-Fall data, the ratios across the four time segments all look quite similar. Overall, this suggests that, rather than a group of new funders coming in at the last moment, a segment of our existing funders chose to wait until the end of the year to donate.
  • The support we received from returning funders in 2016 was particularly noteworthy, with 89% retention (in terms of dollars) from 2015 funders. To put this in a broader context, the average gift retention rate across a representative segment of the US philanthropic space over the last 5 years has been 46%.
  • The number of unique funders to MIRI rose 16% in 2016—from 491 to 571—continuing a general upward trend. 2014 is anomalously high on this graph due to the community’s active participation in our memorable SVGives campaign.9
  • International support continues to make up about 20% of contributions. Unlike in the US, where increases were driven mainly by new institutional support (the Open Philanthropy Project), international support growth was driven by individuals across Europe (notably Scandinavia and the UK), Australia, and Canada.
  • Use of employer matching programs increased by 17% year-on-year, with contributions of over $180,000 received through corporate matching programs in 2016, our highest to date. There are early signs of this growth continuing through 2017.
  • An analysis of contributions made from small, mid-sized, large, and very large funder segments shows that contributions from all four segments increased proportionally from 2015:

Because we raised more than $2 million in 2016, we are now required by California law to prepare an annual financial statement audited by an independent certified public accountant (CPA). That report, like our financial reports of past years, will be made available by the end of September, on our transparency and financials page.

 

Going Forward

As of July 2016, we had the following outstanding goals from mid-2015:

  1. Accelerated growth: “expand to a roughly ten-person core research team.” (source)
  2. Type theory in type theory project: “hire one or two type theorists to work on developing relevant tools full-time.” (source)
  3. Independent review: “We’re also looking into options for directly soliciting public feedback from independent researchers regarding our research agenda and early results.” (source)

We currently have seven research fellows and assistant fellows, and are planning to hire several more in the very near future. We expect to hit our ten-fellow goal in the next 3–4 months, and to continue to grow the research team later this year. As noted above, we’re delaying moving forward on a type theorist hire.

The Open Philanthropy Project is currently reviewing our research agenda as part of their process of evaluating us for future grants. They released an initial big-picture organizational review of MIRI in September, accompanied by reviews of several recent MIRI papers (which Nate responded to here). These reviews were generally quite critical of our work, with Open Phil expressing a number of reservations about our agent foundations agenda and our technical progress to date. We are optimistic, however, that we will be able to better make our case to Open Phil in discussions going forward, and generally converge more in our views of what open problems deserve the most attention.

In our August 2016 strategic update, Nate outlined our other organizational priorities and plans:

  4. Technical research: continue work on our agent foundations agenda while kicking off work on AAMLS.
  5. AGI alignment overviews: “Eliezer Yudkowsky and I will be splitting our time between working on these problems and doing expository writing. Eliezer is writing about alignment theory, while I’ll be writing about MIRI strategy and forecasting questions.”
  6. Academic outreach events: “To help promote our approach and grow the field, we intend to host more workshops aimed at diverse academic audiences. We’ll be hosting a machine learning workshop in the near future, and might run more events like CSRBAI going forward.”
  7. Paper-writing: “We also have a backlog of past technical results to write up, which we expect to be valuable for engaging more researchers in computer science, economics, mathematical logic, decision theory, and other areas.”

All of these are still priorities for us, though we now consider 5 somewhat more important (and 6 and 7 less important). We’ve since run three ML workshops, and have made more headway on our AAMLS research agenda. We now have a large amount of content prepared for our AGI alignment overviews, and are beginning a (likely rather long) editing process. We’ve also released “Logical Induction” and have a number of other papers in the pipeline.

We’ll be providing more details on how our priorities have changed since August in a strategic update post next month. As in past years, object-level technical research on the AI alignment problem will continue to be our top priority, although we’ll be undergoing a medium-sized shift in our research priorities and outreach plans.10

 

  1. See our previous reviews: 2015, 2014, 2013.
  2. From 2015 in review: “Patrick LaVictoire joined in March, Jessica Taylor in August, Andrew Critch in September, and Scott Garrabrant in December. With Nate transitioning to a non-research role, overall we grew from a three-person research team (Eliezer, Benya, and Nate) to a six-person team.”
  3. As I noted in our AMA: “At MIRI, research fellow is a full-time permanent position. A decent analogy in academia might be that research fellows are to assistant research fellows as full-time faculty are to post-docs. Assistant research fellowships are intended to be a more junior position with a fixed 1–2 year term.”
  4. In the interim, our research intern Jack Gallagher has continued to make useful contributions in this domain.
  5. Note that numbers in this section might not exactly match previously published estimates, since small corrections are often made to contributions data. Note also that these numbers do not include in-kind donations.
  6. This figure only counts direct contributions through REG to MIRI. REG/EAF’s support for MIRI is closer to $150,000 when accounting for contributions made through EAF, many made on REG’s advice.
  7. We were also awarded a $75,000 grant from the Center for Long-Term Cybersecurity to pursue a corrigibility project with Stuart Russell and a new UC Berkeley postdoc, but we weren’t able to fill the intended postdoc position in the relevant timeframe and the project was canceled. Stuart Russell subsequently received a large grant from the Open Philanthropy Project to launch a new academic research institute for studying corrigibility and other AI safety issues, the Center for Human-Compatible AI.
  8. We received timely donor recommendations from investment analyst Ben Hoskin, Future of Humanity Institute researcher Owen Cotton-Barratt, and Daniel Dewey and Nick Beckstead of the Open Philanthropy Project (echoed by 80,000 Hours).
  9. Our 45% retention of unique funders from 2015 is very much in line with funder retention across the US philanthropic space, which, combined with the previous point, suggests returning MIRI funders were significantly more supportive than most.
  10. My thanks to Rob Bensinger, Colm Ó Riain, and Matthew Graves for their substantial contributions to this post.


Musk launches company to pursue ‘neural lace’ brain-interface technology

KurzweilAI - Tue, 03/28/2017 - 03:25

(image credit: Bloomberg)

Elon Musk has launched a California-based company called Neuralink Corp. to pursue “neural lace” brain-interface technology, The Wall Street Journal reported today (Monday, March 27, 2017), citing people familiar with the matter.

Neural lace would help prevent humans from becoming “house cats” to AI, he suggests. “I think one of the solutions that seems maybe the best is to add an AI layer,” Musk hinted at the Code Conference last year. It would be a “digital layer above the cortex that could work well and symbiotically with you.”

“We are already a cyborg,” he added. “You have a digital version of yourself online in form of emails and social media. … But the constraint is input/output — we’re I/O bound … particularly output. … Merging with digital intelligence revolves around … some sort of interface with your cortical neurons.”

Reflecting concepts that have been proposed by Ray Kurzweil, “over time I think we will probably see a closer merger of biological intelligence and digital intelligence,” Musk said at the recent World Government Summit in Dubai.

Musk suggested the neural lace interface could be inserted via veins and arteries.

Image showing mesh electronics being injected through sub-100 micrometer inner diameter glass needle into aqueous solution. (credit: Lieber Research Group, Harvard University)

KurzweilAI reported on one approach to a neural-lace-like brain interface in 2015. A “syringe-injectable electronics” concept was invented by researchers in Charles Lieber’s lab at Harvard University and the National Center for Nanoscience and Technology in Beijing. It would involve injecting a biocompatible polymer scaffold mesh with attached microelectronic devices into the brain via syringe.

The process for fabricating the scaffold is similar to that used to etch microchips, and begins with a dissolvable layer deposited on a biocompatible nanoscale polymer mesh substrate, with embedded nanowires, transistors, and other microelectronic devices attached. The mesh is then tightly rolled up, allowing it to be sucked up into a syringe via a thin (100 micrometers internal diameter) glass needle. The mesh can then be injected into brain tissue by the syringe.

The input-output connections of the mesh electronics can be hooked up to standard electronic devices (for voltage input or measurement, for example), allowing the mesh-embedded devices to be individually addressed and used to precisely stimulate or record individual neural activity.

A schematic showing in vivo stereotaxic injection of mesh electronics into a mouse brain (credit: Jia Liu et al./Nature Nanotechnology)

Lieber’s team has demonstrated this in live mice and verified continuous monitoring and recordings of brain signals on 16 channels. “We have shown that mesh electronics with widths more than 30 times the needle ID can be injected and maintain a high yield of active electronic devices … little chronic immunoreactivity,” the researchers said in a June 8, 2015 paper in Nature Nanotechnology. “In the future, our new approach and results could be extended in several directions, including the incorporation of multifunctional electronic devices and/or wireless interfaces to further increase the complexity of the injected electronics.”

This technology would require surgery, but would not face the blood-brain-barrier accessibility limitation of Musk’s preliminary blood-vessel delivery concept. For direct delivery via the bloodstream, it’s possible that the nanorobots conceived by Robert A. Freitas, Jr. (and extended to interface with the cloud, as Ray Kurzweil has suggested) might be appropriate at some point in the future.

“Neuralink has reportedly already hired several high profile academics in the field of neuroscience: flexible electrodes and nanotechnology expert Venessa Tolosa, PhD; UCSF professor Philip Sabes, PhD, who also participated in the Musk-sponsored Beneficial AI conference; and Boston University professor Timothy Gardner, PhD, who studies neural pathways in the brains of songbirds,” Engadget reports.

UPDATE Mar. 28, 2017: Recode video: “We are already cyborgs” | Elon Musk | Code Conference 2016

Travelers to Mars risk leukemia cancer, weakened immune function from radiation, NASA-funded study finds

KurzweilAI - Mon, 03/27/2017 - 08:28

The spleen from a mouse exposed to a mission-relevant dose (20 cGy, 1 GeV/n) of iron ions (bottom) was ~ 30 times the normal volume compared with the spleen from a control mouse (top). (credit: C Rodman et al./Leukemia)

Radiation encountered in deep space travel may increase the risk of leukemia cancer in humans traveling to Mars, NASA-funded researchers at the Wake Forest Institute for Regenerative Medicine and colleagues have found, using mice transplanted with human stem cells.

“Our results are troubling because they show radiation exposure could potentially increase the risk of leukemia,” said Christopher Porada, Ph.D., associate professor of regenerative medicine and senior researcher on the project.

Radiation exposure is believed to be one of the most dangerous aspects of traveling to Mars, according to NASA. The average distance to Mars is 140 million miles, and a round trip could take three years.

The goal of the study, published in the journal Leukemia, was to assess the direct effects of simulated solar energetic particles (SEP) and galactic cosmic ray (GCR) radiation on human hematopoietic stem cells (HSCs). These stem cells comprise less than 0.1% of the bone marrow of adults, but produce the many types of blood cells that circulate through the body and work to transport oxygen, fight infection, and eliminate any malignant cells that arise.

For the study, human HSCs from healthy donors of typical astronaut age (30–55 years) were exposed to Mars mission-relevant doses of protons and iron ions, the same types of radiation that astronauts would encounter in deep space; laboratory and animal studies then assessed the impact of the exposure.

“Radiation exposure at these levels was highly deleterious to HSC function, reducing their ability to produce almost all types of blood cells, often by 60–80 percent,” said Porada. “This could translate into a severely weakened immune system and anemia during prolonged missions in deep space.”

The radiation also caused mutations in genes involved in the hematopoietic process and dramatically reduced the ability of HSCs to give rise to mature blood cells.

Previous studies had already demonstrated that exposure to high doses of radiation, such as from X-rays, can have harmful (even life-threatening) effects on the body’s ability to make blood cells, and can significantly increase the likelihood of cancers, especially leukemias. However, the current study was the first to show a damaging effect of lower, mission-relevant doses of space radiation.

Mice develop T-cell acute lymphoblastic leukemia, weakened immune function

The next step was to assess how the cells would function in the human body. For that purpose, mice were transplanted with GCR-irradiated human HSCs, essentially “humanizing” the animals. The mice developed what appeared to be T-cell acute lymphoblastic leukemia — the first demonstration that exposure to space radiation may increase the risk of leukemia in humans.

“Our results show radiation exposure could potentially increase the risk of leukemia in two ways,” said Porada. “We found that genetic damage to HSCs directly led to leukemia. Secondly, radiation also altered the ability of HSCs to generate T and B cells, types of white blood cells involved in fighting foreign ‘invaders’ like infections or tumor cells. This may reduce the ability of the astronaut’s immune system to eliminate malignant cells that arise as a result of radiation-induced mutations.”

Porada said the findings are particularly troubling given previous work showing that conditions of weightlessness/microgravity present during spaceflight can also cause marked alterations in astronauts’ immune function, even after short-duration missions in low-Earth orbit, where they are largely protected from cosmic radiation.

Taken together, the results indicate that the combined exposure to microgravity and SEP/GCR radiation that would occur during extended deep space missions, such as to Mars, could potentially exacerbate the risk of immune dysfunction and cancer.

NASA’s Human Research Program is also exploring conditions of microgravity, isolation and confinement, hostile and closed environments, and distance from Earth. The ultimate goal of the research is to make space missions as safe as possible.

Researchers at Wake Forest Baptist Medical Center, Brookhaven National Laboratory, and the University of California Davis Comprehensive Cancer Center were also involved in the study.

Abstract of In vitro and in vivo assessment of direct effects of simulated solar and galactic cosmic radiation on human hematopoietic stem/progenitor cells

Future deep space missions to Mars and near-Earth asteroids will expose astronauts to chronic solar energetic particles (SEP) and galactic cosmic ray (GCR) radiation, and likely one or more solar particle events (SPEs). Given the inherent radiosensitivity of hematopoietic cells and short latency period of leukemias, space radiation-induced hematopoietic damage poses a particular threat to astronauts on extended missions. We show that exposing human hematopoietic stem/progenitor cells (HSC) to extended mission-relevant doses of accelerated high-energy protons and iron ions leads to the following: (1) introduces mutations that are frequently located within genes involved in hematopoiesis and are distinct from those induced by γ-radiation; (2) markedly reduces in vitro colony formation; (3) markedly alters engraftment and lineage commitment in vivo; and (4) leads to the development, in vivo, of what appears to be T-ALL. Sequential exposure to protons and iron ions (as typically occurs in deep space) proved far more deleterious to HSC genome integrity and function than either particle species alone. Our results represent a critical step for more accurately estimating risks to the human hematopoietic system from space radiation, identifying and better defining molecular mechanisms by which space radiation impairs hematopoiesis and induces leukemogenesis, as well as for developing appropriately targeted countermeasures.

Scientists reverse aging in mice by repairing damaged DNA

KurzweilAI - Mon, 03/27/2017 - 02:55

A research team led by Harvard Medical School professor of genetics David Sinclair, PhD, has made a discovery that could lead to a revolutionary new drug that allows cells to repair DNA damaged by aging, cancer, and radiation.

In a paper published in the journal Science on Friday (March 24), the scientists identified a critical step in the molecular process related to DNA damage.

The researchers found that a compound known as NAD (nicotinamide adenine dinucleotide), which is naturally present in every cell of our body, has a key role as a regulator in protein-to-protein interactions that control DNA repair. In an experiment, they found that treating mice with a NAD+ precursor called NMN (nicotinamide mononucleotide) improved their cells’ ability to repair DNA damage.

“The cells of the old mice were indistinguishable from the young mice, after just one week of treatment,” said senior author Sinclair.

Disarming a rogue agent: When the NAD molecule (red) binds to the DBC1 protein (beige), it prevents DBC1 from attaching to and incapacitating a protein (PARP1) that is critical for DNA repair. (credit: David Sinclair)

Human trials of NMN therapy will begin within the next few months to “see if these results translate to people,” he said. A safe and effective anti-aging drug is “perhaps only three to five years away from being on the market if the trials go well.”

What it means for astronauts, childhood cancer survivors, and the rest of us

The researchers say that in addition to reversing aging, the DNA-repair research has attracted the attention of NASA. The treatment could help deal with radiation damage to astronauts on its Mars mission, which could cause muscle weakness, memory loss, and other symptoms (see “Mars-bound astronauts face brain damage from galactic cosmic ray exposure, says NASA-funded study“), and more seriously, leukemia cancer and weakened immune function (see “Travelers to Mars risk leukemia cancer, weakened immune function from radiation, NASA-funded study finds“).

The treatment could also help travelers aboard aircraft flying across the poles. A 2011 NASA study showed that passengers on polar flights receive about 12 percent of the annual radiation limit recommended by the International Commission on Radiological Protection.

The other group that could benefit from this work is survivors of childhood cancers, who are likely to suffer a chronic illness by age 45, leading to accelerated aging, including cardiovascular disease, Type 2 diabetes, Alzheimer’s disease, and cancers unrelated to the original cancer, the researchers noted.

For the past four years, Sinclair’s team has been working with spinoff MetroBiotech on developing NMN as a drug. Sinclair previously made a link between the anti-aging enzyme SIRT1 and resveratrol. “While resveratrol activates SIRT1 alone, NAD boosters [like NMN] activate all seven sirtuins, SIRT1-7, and should have an even greater impact on health and longevity,” he says.

Sinclair is also a professor at the University of New South Wales School of Medicine in Sydney, Australia.

Abstract of A conserved NAD+ binding pocket that regulates protein-protein interactions during aging

DNA repair is essential for life, yet its efficiency declines with age for reasons that are unclear. Numerous proteins possess Nudix homology domains (NHDs) that have no known function. We show that NHDs are NAD+ (oxidized form of nicotinamide adenine dinucleotide) binding domains that regulate protein-protein interactions. The binding of NAD+ to the NHD domain of DBC1 (deleted in breast cancer 1) prevents it from inhibiting PARP1 [poly(adenosine diphosphate–ribose) polymerase], a critical DNA repair protein. As mice age and NAD+ concentrations decline, DBC1 is increasingly bound to PARP1, causing DNA damage to accumulate, a process rapidly reversed by restoring the abundance of NAD+. Thus, NAD+ directly regulates protein-protein interactions, the modulation of which may protect against cancer, radiation, and aging.

A printable, sensor-laden ‘skin’ for robots (or an airplane)

KurzweilAI - Fri, 03/24/2017 - 22:59

Illustration of 3D-printed sensory composite (credit: Subramanian Sundaram)

MIT researchers have designed a radical new method of creating flexible, printable electronics that combine sensors and processing circuitry.

Covering a robot — or an airplane or a bridge, for example — with sensors will require a technology that is both flexible and cost-effective to manufacture in bulk. To demonstrate the feasibility of their new method, the researchers at MIT’s Computer Science and Artificial Intelligence Laboratory have designed and built a 3D-printed device that responds to mechanical stresses by changing the color of a spot on its surface.

Sensorimotor pathways

“In nature, networks of sensors and interconnects [such as the human nervous system] are called sensorimotor pathways,” says Subramanian Sundaram, an MIT graduate student in electrical engineering and computer science (EECS), who led the project. “We were trying to see whether we could replicate sensorimotor pathways inside a 3-D-printed object. So we considered the simplest organism we could find” — the golden tortoise beetle, or “goldbug,” an insect whose exterior usually appears golden but turns reddish orange if the insect is poked or prodded, that is, mechanically stressed.

The researchers present their new design in the latest issue of the journal Advanced Materials Technologies.

The key innovation was to 3D-print the plastic substrate (support structure) itself, rather than placing printed components on top of a prefabricated substrate. That greatly increases the range of devices that can be created: a printed substrate can consist of many materials, interlocked in intricate but regular patterns, which broadens the range of functional materials that printable electronics can use.*

Printed substrates also open the possibility of devices that, although printed as flat sheets, can fold themselves up into more complex, three-dimensional shapes. Printable robots that spontaneously self-assemble when heated, for instance (see “Self-assembling printable robotic components“), are a topic of ongoing research at the CSAIL Distributed Robotics Laboratory, led by Daniela Rus, the Andrew and Erna Viterbi Professor of Electrical Engineering and Computer Science at MIT.

3D-printed sensory composite

The sensory composite is grouped into 4 sets of functional layers: a base with spatially varying mechanical stiffness and surface energy, electrical materials, electrolyte, and capping layers. All these materials are 3D-printed. (credit: Subramanian Sundaram et al./ Advanced Materials Technologies)

The MIT researchers’ new device is approximately T-shaped, but with a wide, squat base and an elongated crossbar. The crossbar is made from an elastic plastic, with a strip of silver running its length; in the researchers’ experiments, electrodes were connected to the crossbar’s ends. The base of the T is made from a more rigid plastic. It includes two printed transistors and what the researchers call a “pixel,” a circle of semiconducting polymer whose color changes when the crossbar stretches, modifying the electrical resistance of the silver strip.**

A transistor consists of a semiconductor channel on top of which sits a “gate,” a metal wire that, when charged, generates an electric field that switches the semiconductor between its electrically conductive and nonconductive states. In a standard transistor, there’s an insulator between the gate and the semiconductor, to prevent the gate current from leaking into the semiconductor channel.

The transistors in the MIT researchers’ device instead separate the gate and the semiconductor with an electrolyte — a layer of water containing potassium chloride mixed with glycerol. Charging the gate drives potassium ions into the semiconductor, changing its conductivity.***

Photograph of the fully 3D-printed sensory composite shows a strain sensor (top) linked to an electrical amplifier that modulates the transparency of the electrochromic pixel (scale bar is 10mm). (credit: Subramanian Sundaram et al./ Advanced Materials Technologies)

“I am very impressed with both the concept and the realization of the system,” says Hagen Klauk, who leads the Organic Electronic Research Group at the Max Planck Institute for Solid State Research, in Stuttgart, Germany. “The approach of printing an entire optoelectronic system — including the substrate and all the components — by depositing all the materials, including solids and liquids, by 3-D printing is certainly novel, interesting, and useful, and the demonstration of the functional system confirms that the approach is also doable. By fabricating the substrate on the fly, the approach is particularly useful for improvised manufacturing environments where dedicated substrate materials may not be available.”

The work was supported by the DARPA SIMPLEX program through SPAWAR.

* To build the device, the researchers used the MultiFab, a custom 3-D printer developed at MIT. The MultiFab already included two different “print heads,” one for emitting hot materials and one for cool, and an array of ultraviolet light-emitting diodes. Using ultraviolet radiation to “cure” fluids deposited by the print heads produces the device’s substrate.

** Sundaram added a copper-and-ceramic heater, which was necessary to deposit the semiconducting plastic: The plastic is suspended in a fluid that’s sprayed onto the device surface, and the heater evaporates the fluid, leaving behind a layer of plastic only 200 nanometers thick. The layer of saltwater lowers the device’s operational voltage, so that it can be powered with an ordinary 1.5-volt battery.

*** But it does render the device less durable. “I think we can probably get it to work stably for two months, maybe,” Sundaram says. “One option is to replace that liquid with something between a solid and a liquid, like a hydrogel, perhaps. But that’s something we would work on later. This is an initial demonstration.”

Abstract of 3D-Printed Autonomous Sensory Composites

A method for 3D-printing autonomous sensory composites requiring no external processing is presented. The composite operates at 1.5 V, locally performs active signal transduction with embedded electrical gain, and responds to stimuli, reversibly transducing mechanical strain into a transparency change. Digital assembly of spatially tailored solids and thin films, with encapsulated liquids, provides a route for realizing complex autonomous systems.

Mayo Clinic discovers high-intensity aerobic training can reverse aging

KurzweilAI - Fri, 03/24/2017 - 07:42

Mayo Clinic study finds high-intensity aerobic exercise may reverse aging (credit: Flickr user Global Panorama via Creative Commons license)

A Mayo Clinic study says the best training for adults is high-intensity aerobic exercise, which the researchers believe can reverse some cellular aspects of aging.

Mayo researchers compared 12 weeks of high-intensity interval training (workouts in which you alternate periods of high-intensity exercise with low-intensity recovery periods), resistance training, and combined training. While all three enhanced insulin sensitivity and lean mass, only high-intensity interval training and combined training improved aerobic capacity and skeletal muscle mitochondrial respiration. (Decline in mitochondrial content and function are common in older adults.)

High-intensity intervals also improved muscle protein content, which enhanced energetic functions and caused muscle enlargement, especially in older adults. The researchers said exercise training significantly enhanced the cellular machinery responsible for making new proteins, contributing to protein synthesis and thus reversing a major adverse effect of aging.

12 weeks exercise training in younger and older people (credit: Mayo Clinic)

“We encourage everyone to exercise regularly, but the take-home message for aging adults is that supervised high-intensity training is probably best, because, both metabolically and at the molecular level, it confers the most benefits,” says K. Sreekumaran Nair, M.D., Ph.D., a Mayo Clinic endocrinologist and senior researcher on the study.

He says the high-intensity training reversed some manifestations of aging in the body’s protein function, but noted that increasing muscle strength requires resistance training a couple of days a week.

Other findings

In the study, researchers tracked metabolic and molecular changes in a group of young and older adults over 12 weeks, gathering data 72 hours after individuals in randomized groups completed each type of exercise. General findings showed:

  • Cardiorespiratory health, muscle mass, and insulin sensitivity improved with all training.
  • Mitochondrial cellular function declined with age but improved with training.
  • Muscle strength increased only modestly with high-intensity interval training, but increased with resistance training alone or when resistance training was added to aerobic training.
  • Exercise improved skeletal muscle gene expression independent of age.
  • Exercise substantially enhanced the ribosomal proteins responsible for synthesizing new proteins, which is mainly responsible for enhanced mitochondrial function.
  • Training had no significant effect on skeletal muscle DNA epigenetic changes but promoted skeletal muscle protein expression, with maximum effect in older adults.

The research findings appear in Cell Metabolism. The research was supported by the National Institutes of Health, Mayo Clinic, the Robert and Arlene Kogod Center on Aging, and the Murdock-Dole Professorship.

Abstract of Enhanced Protein Translation Underlies Improved Metabolic and Physical Adaptations to Different Exercise Training Modes in Young and Old Humans

The molecular transducers of benefits from different exercise modalities remain incompletely defined. Here we report that 12 weeks of high-intensity aerobic interval (HIIT), resistance (RT), and combined exercise training enhanced insulin sensitivity and lean mass, but only HIIT and combined training improved aerobic capacity and skeletal muscle mitochondrial respiration. HIIT revealed a more robust increase in gene transcripts than other exercise modalities, particularly in older adults, although little overlap with corresponding individual protein abundance was noted. HIIT reversed many age-related differences in the proteome, particularly of mitochondrial proteins in concert with increased mitochondrial protein synthesis. Both RT and HIIT enhanced proteins involved in translational machinery irrespective of age. Only small changes of methylation of DNA promoter regions were observed. We provide evidence for predominant exercise regulation at the translational level, enhancing translational capacity and proteome abundance to explain phenotypic gains in muscle mitochondrial function and hypertrophy in all ages.
