This is Paul Torek's Typepad Profile.
Paul Torek
Ann Arbor, Michigan
Engineer doing metals recycling, with Philosophy PhD
Recent Activity
_Women's Rights in the Middle East and North Africa_, p. 164: Iraqi ... courts draw many of their rules from Shari'a, which requires two female witnesses for their testimony to be considered, whereas a man can stand as a sole witness. Of course, bias doesn't have to be explicit like that. But that really struck me when I read it, or something like it, in the news. I googled it and this came up.
Commented Feb 8, 2017 on 4. The reactive attitudes at Flickers of Freedom
I like how you view a person over time, not just at an instant, as a locus of responsibility. This is congruent with your emphasis on the social conditions of responsibility. There are certain qualities which become visible only with a sufficiently broad purview - the wetness of water being a classic example, where a single molecule won't suffice.
A further thought, following P. Christian Adamski, about how Everett is portrayed in popular media and sci-fi: the division between "worlds" in those depictions is much cleaner and more stark than anything a physicist might countenance. Just reading the Wiki article on decoherence ( https://en.wikipedia.org/wiki/Quantum_decoherence ) convinced me that the Everett interpretation is much more elegant and credible than its sci-fi imitations.
Thomas, I suggest putting Marino & Tamburrini at the top of your reading list. From their abstract, it sounds like their issues are the ones I would urge. Modern AI systems are all heavy with machine learning, which makes the programmer less and less relevant as the machine is trained up. Focusing on the programmer will take your eye away from where most of the action is. A good starting point might be https://en.wikipedia.org/wiki/Supervised_learning Danaher raises an interesting point (maybe this is not what he meant...). I expect robots to be full-blooded agents in a century or so, but unlike humans, mammals, or birds, they might not care much about any "punishment" you can throw at them. That could create a conundrum.
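To illustrate why the programmer recedes in machine-learned systems, here is a minimal sketch (a hypothetical toy example, not anything from the linked article): the programmer supplies only a generic learning rule, here 1-nearest-neighbor, and the system's actual behavior is fixed by whatever training data it is given.

```python
# Toy supervised learning: the code below never mentions "low" or "high";
# those classifications come entirely from the labeled training examples.

def nearest_neighbor_predict(training_data, x):
    """Return the label of the training point closest to x."""
    _, label = min(training_data, key=lambda pair: abs(pair[0] - x))
    return label

# Labeled examples supplied by a "teacher", not the programmer:
training_data = [(1, "low"), (2, "low"), (3, "low"),
                 (7, "high"), (8, "high"), (9, "high")]

print(nearest_neighbor_predict(training_data, 2.4))  # -> low
print(nearest_neighbor_predict(training_data, 7.9))  # -> high
```

Retrain the same code on different labeled examples and it behaves entirely differently, which is the sense in which "most of the action" has moved from the programmer to the training regime.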
Commented Jan 9, 2017 on Help from the Hive Mind at Flickers of Freedom
On "Believing in free will": it mentions the Vohs research on belief and cheating, but that research partially failed a replication attempt: https://osf.io/i29mh/ By partial failure, I mean the effect size was much smaller in the replication. I don't mean to cast any doubts on Seto's study.

The bioengineered future, on the other hand, definitely merits some doubts. "The life sciences, he announces, consider that all living things (including humans) are just algorithms." Um, no: at least not by the usual, Turing-machine-based, definition of algorithm. Turing machines are discrete; living things are generally analog systems. The Schrödinger equation uses real-numbered values for t (time) and x (position). It takes careful engineering to build something out of physical particles that is not an analog system, and not highly sensitive to minute variations in the values of some parameters. But the real problem with that statement isn't the "algorithms". (Perhaps the Turing machine definition isn't the best.) It's the "just". What's to belittle about algorithms, if they can do all this?

As for turning over voting to a machine, I can already bring my cell phone into the voting booth. I could rely on it to relay verdicts from a larger computer at work or home, if I wanted to. Key words: if I wanted to. No changes to voting rights are necessary or desirable. If computer algorithms gain the prophesied abilities, people can take advantage of them.

A more substantial crisis for democracy comes from something the article doesn't discuss. If people can "upload" to computers, they can also create many copies of themselves. Denying these new persons any voting rights seems wrong, but so does allowing a few people to become a majority voting bloc because they're rich enough to buy 51% of all computing resources. Luckily, this paragraph contains an "if", which I suspect won't come true. Unluckily, the reason is that we'll probably face a bigger crisis first.
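For reference, the real-valued parameters in question appear directly in the standard one-dimensional, time-dependent Schrödinger equation:

```latex
i\hbar\,\frac{\partial \Psi(x,t)}{\partial t}
  = -\frac{\hbar^{2}}{2m}\,\frac{\partial^{2} \Psi(x,t)}{\partial x^{2}}
  + V(x)\,\Psi(x,t)
```

Both t and x range over the real numbers, which is the sense in which the dynamics is analog rather than discrete in the Turing-machine sense.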
The best thing so far written bearing on futurism, IMO, is http://slatestarcodex.com/2014/07/30/meditations-on-moloch/ The bearing is mostly indirect, but powerful nonetheless.
Commented Sep 10, 2016 on Two Pieces on Free Will at Flickers of Freedom
This is a very impressive post, and series. I just wanted to agree and emphasize that "these capacities are more ‘extended’ than classic accounts imply." I don't think this is a criticism of classic accounts, but rather an extended development of them. I take you to be showing part of what reasons-responsiveness (for example) *is*.
Hi Suzy, Would - I don't know if there's already another term - autonomous accomplishment be included in your single concept? Around 18-24 months of age, a child will often start saying "No! *I* do it!" when a parent "helpfully" contributes to making a tower of blocks. From then on, most people attribute a personal value to autonomy. This seems like an important dimension to try to capture.
Commented Aug 11, 2016 on A Unified Theory of Autonomy? at Flickers of Freedom
Oh, I misunderstood about the dropped constant (but I feel like it should be kept, to smooth the segue into relativity). Sure, there are objective facts about the complexity of a laws+conditions system relative to any given UTM, and moreover the complexity measures are strongly related; those points seem straightforward.
Commented Aug 11, 2016 on Computation, Laws and Supervenience at Tomkow.com
Ah, thank you, "what this is supposed to be" is clear now. I should have read the previous post, although now that I have, I think I'm having a Far Side moment (Mr. Osborne, may I be excused? My brain is full.)

I think you can live with some relativity. Suppose there are two Turing machines, one of which optimizes complexity with a few more "initial conditions", and the other of which has a few more laws. Still, they make the same predictions, and it just seems intuitively likely that most laws in one system would have rough or exact counterparts in the other.

I'm not convinced by the argument in the other post for dropping the O(1) constant, though. You made an enumeration of Turing machine specifications, written as data on tape, to be computed by a given universal Turing machine (UTM) U. You then pointed out that another UTM U' could simulate machine T(i) when given just the number i. But had you started from a different UTM V, the complexity ordering of Turing machines (as data on tape) would typically differ (by at most a constant, I guess), and what we used to call T(i) would now have a different number j.
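The "by at most a constant" point is the standard invariance theorem of Kolmogorov complexity (a textbook statement, not anything from the post itself): for any two universal machines U and V there is a constant c depending only on U and V, not on the string being described, such that

```latex
K_U(x) \;\le\; K_V(x) + c_{U,V} \qquad \text{for all strings } x
```

Roughly, c_{U,V} is the length of a program by which U simulates V. So complexity is machine-relative, but only up to an additive constant, which is exactly why dropping (or keeping) the O(1) term is a judgment call rather than a forced move.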
Commented Aug 11, 2016 on Computation, Laws and Supervenience at Tomkow.com
Terrance, I only just now discovered this excellent post. I have two questions. First, why is Nomological Relativity not relevant to the issue at hand, or in other words what *is* the issue at hand? Second, is this supposed to be an ontological reduction of laws, or simply a set of logical implications to and from law-statements from/to other statements?
Commented Aug 10, 2016 on Computation, Laws and Supervenience at Tomkow.com
Paul Torek is now following Sprachlogik.blogspot.com
Aug 8, 2016
Addendum to my last comment: here are a few ways the Concrete cases could militate against subjects' explicit theories. (1) People in this deterministic universe still look ahead to consequences of actions, instead of being pushed from behind. (2) People ponder their perceived desires and wonder whether to go with the flow or against it (Angra's point, roughly?). (3) People's decision making is not bypassed but plays a vital causal role (Eddy's theory). To the extent the Concrete cases state, show, or suggest any of these features, that could explain the concrete/abstract differences.
For once I agree with everyone! Very well-thought-out posts that really energized this blog. Thank you, Josh.
Commented Jul 22, 2016 on Thanks! at Flickers of Freedom
Joshua, Good question. I suspect that in Abstract cases, people who have an explicit Agent Causation (AC) theory simply deploy it. But they can't easily do that in Concrete cases because the description of the case militates against the explicit theory. The subject is forced to consider an agent who seems quite like us, and who lives in a deterministic universe, and the subject finds the notion surprisingly easy to accept. Your survey is somewhat turning them into compatibilists-in-principle, although they may still believe that in *our* universe agents act contra-causally. Compare a person who goes into the movie Blade Runner thinking that artificial beings could never be conscious, but comes out thinking differently.
Joshua, That's very plausible, and it explains all the data that I'm aware of on concrete/abstract effects. Which is less than the data you're aware of. But I want to put my competing hypothesis out there, which can also explain all the data I'm aware of. On this lighter-folk-commitment hypothesis, the folks' automatic implicit understanding of agency has it that the agent can resist mental tendencies. (Note that in everyday observation, only mental tendencies, not physical traits, seem relevant to explaining decisions.) In this, "the agent" is unspecified other than, by implication, it has to go beyond simply mental tendencies. It could be dualistic or physical, indeterministic or deterministic. Which is not to say that the folks will find all these possibilities equally believable. Many of them will have explicit theories saying that agents are dualistic and contra-causal. Even those who don't, may find a physicalistic view over-complex and far-fetched. But as far as the automatic implicit understanding of agency goes, "the agent" is mostly a blank check, to be filled in later. This is just a variant of Eddy's "theory-lite" approach, as far as I can tell. But if not, I'll take the blame for it. It's the same old thing I've been getting at for three threads now - but hopefully, new aspects are showing up.
Chandra, Just to clarify, I'm with Dickinson Miller (aka Hobart) - the real one, not Wikipedia's version - on the undesirability of chance. A little bit of chance, however, is tolerable. And the chance in question is chaos, not necessarily true indeterminism. Psychological determinism is hard to reconcile with neurology, but physical determinism seems untouched, given a suitable interpretation of QM.
Matt, Angra, Joshua, Eddy, Chandra - Your intriguing discussion brings me back to something I wrote in comments on the last post, about mental causes' insufficiency.

So suppose I, like Joshua's steak-hungry woman, also crave steak. This mental state makes it likely that I will eat steak. But I resist, and my resistance cannot be explained by another mental state in any reasonably narrow sense of "mental state" ("state that explains later actions" does not count as reasonably narrow).

Is all this consistent with the denial of contra-causal agency? Yes, and moreover, it's probably at least sometimes true. Peter Tse gave us plenty of evidence that the brain relies on chaotic processes, wherein subtle differences in timing and frequency of neural spikes can lead to dramatically different decisions and actions. It would be implausible to suppose that introspective and intuitive classifications of mental states ("hungry for steak", "not hungry for steak") always line up with these subtle physical differences. So, probably, mental states are insufficient to cause specific behaviors, at least some of the time. The full cause requires specification of physical states.

Now, Peter holds that these physical states behave indeterministically, and lots of physicists agree; but that part is dispensable (and disputed). If we remain agnostic about determinism (as any non-physics-expert, at least, should) then we have to leave open the possibility of determinism without psychological-determinism.

But when I resist, is it really *I* that resist? Arguably, yes. The physical events involved are part of the normal, proper functioning of the decision-making brain. I have good reasons why I resisted the urge, even though someone with information only about those reasons and other (mental) motives would rationally bet against my resistance.
So, I claim, the evidence *favors* the man on the Clapham omnibus's claim that mental states "flow" in a certain direction but that he, himself, can and sometimes does resist. I now posit that most folks have the *theory* that the "I" that resists is a dualistic, contra-causal intervener. The theory is simple and elegant, at least if one doesn't know much neurology. It explains why mental states aren't sufficient (but as I've just argued, it's not the only explanation possible). It's taught in Sunday school, and implicit in movies like All Of Me, Heaven Can Wait, etc., etc. They may hold this in a Theory-Lite fashion a la Eddy, but regardless, it can explain why they think certain actions couldn't happen in a deterministic world.
Joshua, Thanks for that clarification; let's focus on whether mental states cause decisions. Let's further suppose that "cause" here means "deterministically cause". It seems compatible with determinism that mental states don't cause decisions. Mental states are not sufficiently specific. They only probabilify decisions, not "cause" them in this strict sense.
To clarify my last post, I'm wondering about how people use the *word* "cause"; I'm not so much wondering about how they reason about (what most philosophers would call) causation. I found this nice compilation, which unfortunately doesn't look promising for an answer to my question: http://experimental-philosophy.yale.edu/xphipage/Experimental%20Philosophy-Causation.html
Joshua, Your hypothesis (3) makes me wonder: have X-Phi'ers investigated whether the folk conception of "cause" is deterministic? Would lots of the folk regard "probabilistic causation" as an utterly novel idea (or worse, an outright contradiction)? Supposing that they would, my next question is, do the folk also reject the idea that *decision* can cause action? Because to the casual observer, decision is the only mental state that highly reliably leads to a specific action, i.e. the one that was decided on. Masochists steer toward pain, not away. Ascetics avoid pleasure. Macho men confront the things they fear. Etc.
I agree with James, David, and Joshua all at the same time.

For the dimensionless point, I think we see no evidence of it in these experiments, but in other experiments I bet you would. For example, show your X-Phi subjects some of the personal identity thought experiments that David Wiggins presents, together with some of the thought experiments John Locke presents. I bet you dollars to donuts that plenty of subjects will turn to a dimensionless point theory of personal identity.

I could not agree more strongly with David's point that people's identity-thinking is driven by "specific practical concern(s)." Of course, that doesn't mean that all the concerns can fit comfortably into one concept, nor on the other hand does it mean that people will recognize that and stop trying to cram them all into one single "identity". This, I suggest, is part of the problem with folks' (not necessarily explicit) philosophical thinking about identity.

Among those practical concerns, there are plenty of dimensions where normative/evaluative considerations apply. So I think this is compatible with Joshua's folk-essentialism. I don't get why it's a problem that artifacts like scientific papers have different virtues, hence different essences. Of course they do, because we have different sets of practical concerns for papers, versus people. I must be missing something ... ?

I don't go in for dimensionless points. So I think we'll have to find identity among some of the concerns David mentions. And we'll have to admit (with Parfit, e.g.) that some concerns go beyond identity, and are only loosely related to it. This will be a project of repair and reconstruction, not a simple description of folk intuitions.
Angra, I think we could use "flourishing" to include self-sacrifice where the alternative is, per your example, death of one's loved ones. And sure enough, that makes it just as mysterious as true self. But when we acknowledge both biological and cultural, both physical and mental, aspects of the self, I think we generally know how to recognize advances in true-self-understanding - even though it often can be quite difficult to decide prospectively, and even though philosophical metacognition on this topic is much harder.
Tamler, Bingo: "it is an example of someone not flourishing." I want to expand on that, because I think it provides a way to understand the "true self" that doesn't necessarily involve any metaphysical heavy lifting. It also provides a potential underpinning for Angra's "ideal reflection" theory, viz., ideal reflection would lead me to a certain way of life because (or in part because) that's how I can flourish. To use Joshua's terms from the previous post, a person isn't just a set of mental states. She's a living organism. Obviously her mental states must to some degree reflect her organic nature, but that leaves a lot of room for an independent reality of "true self" that the mind may not always accurately capture. Now a human being is a social animal, so that points to an obvious important role for interpersonal morality. But it also, apparently, leaves room for the idea that someone's true self may be called to wronging others. So, while this discovery of evaluatively laden "true self" thinking may require some philosophers to revise their overly mentalistic theories of appropriately agent-sourced decision making (or else reject large sets of intuitions), I do think it can be reconciled with a relatively modest metaphysics of the person.
Not so nebulous though, upon reading the (excellent) linked papers. I didn't need much convincing, I guess, that people use these two viewpoints (moralized essence, and featureless-point). The real action, in my view, comes in how to compare/reconcile these viewpoints to the ontologies widely accepted in philosophy.
Commented Jul 4, 2016 on Agency and the Self at Flickers of Freedom
Greg, You're right of course that if we actually have both compatibilist freedom and an additional incompatibilist kind that "has some value above and beyond", then the Free Will Defense stands. But the "actually have both" is important. A mere logical possibility isn't good enough, just as a mere logical possibility that determinism could be false (and who wouldn't grant that?) wouldn't help the FWD if in the actual world determinism is true. So in this context, please take "compatibilism" in my comments above to mean "compatibilism about all the types of freedom that we actually have."