Monday, January 30, 2017

David Livingstone Smith: The Politics of Salvation: Ideology, Propaganda, and Race in Trump's America

David Livingstone Smith's talk at UC Riverside, Jan 19, 2017:

Introduction by Milagros Pena, Dean of UCR's College of Humanities, Arts, and Social Sciences. Panel discussants are Jennifer Merolla (Political Science, UCR), Armando Navarro (Ethnic Studies, UCR), and me. After the Dean's remarks, David's talk is about 45 minutes, then about 5-10 minutes for each discussant, then open discussion with the audience for the remainder of the three hours, moderated by David Glidden (Philosophy, UCR).

Smith outlines Roger Money-Kyrle's theory of propaganda -- drawn from observing Hitler's speeches. On Money-Kyrle's view propaganda involves three stages: (1) induce depression, (2) induce paranoia, and (3) offer salvation. Smith argues that Trump's speeches follow this same pattern.

Smith also argues for a "teleofunctional" notion of ideological beliefs as beliefs that have the function of promoting oppression in the sense that those beliefs have proliferated because they promote oppression. On this view, beliefs are ideological, or not, depending on their social or cultural lineage. One's own personal reasons for adopting those beliefs are irrelevant to the question of whether they are ideological. In the case of Trump in particular, Smith argues, regardless of why he embraces the beliefs he does, or what his personal motives are, if his beliefs are beliefs with the cultural-historical function of promoting oppression, they are ideological.

Friday, January 27, 2017

What Happens to Democracy When the Experts Can't Be Both Factual and Balanced?

Yesterday Stephen Bannon, one of Trump's closest advisors, called the media "the opposition party". My op-ed piece in today's Los Angeles Times is my response to that type of thinking.

What Happens to Democracy When the Experts Can't Be Both Factual and Balanced?

Does democracy require journalists and educators to strive for political balance? I’m hardly alone in thinking the answer is "yes." But it also requires them to present the facts as they understand them — and when it is not possible to be factual and balanced at the same time, democratic institutions risk collapse.

Consider the problem abstractly. Democracy X is dominated by two parties, Y and Z. Party Y is committed to the truth of propositions A, B and C, while Party Z is committed to the falsity of A, B and C. Slowly the evidence mounts: A, B and C look very likely to be false. Observers in the media and experts in the education system begin to see this, but the evidence isn’t quite plain enough for non-experts, especially if those non-experts are aligned with Party Y and already committed to A, B and C....

[continued here]

Wednesday, January 25, 2017

Fiction Writing Workshop for Philosophers in Oxford, June 1-2

... the deadline for application is Feb. 1.

It's being run by the ever-awesome Helen De Cruz, supported by the British Society of Aesthetics. The speakers/mentors will be James Hawes, Sara L. Uckelman, and me.

More details here.

If you're at all interested, I hope you will apply!

Tuesday, January 24, 2017

The Philosopher's Rationalization-O-Meter

Usually when someone disagrees with me about a philosophical issue, I think they're about 20% correct. Once in a while, I think a comment is just straightforwardly wrong. Very rarely, I find myself convinced that the person who disagrees is correct and my original view was mistaken. But for the most part, there's a remarkable consistency: The critic has a piece of the truth, but I have more of it.

My inner skeptic finds this to be a highly suspicious state of affairs.

Let me clarify what I mean by "about 20% correct". I mean this: There's some merit in what the disagreeing person says, but on the whole my view is still closer to correct. Maybe there's some nuance that they're noticing, which I elided, but which doesn't undermine the big picture. Or maybe I wasn't careful or clear about some subsidiary point. Or maybe there's a plausible argument on the other side which isn't decisively refutable but which also isn't the best conclusion to draw from the full range of evidence holistically considered. Or maybe they've made a nice counterpoint which I hadn't previously considered but to which I have an excellent rejoinder available.

In contrast, for me to think that someone who disagrees with me is "mostly correct", I would have to be convinced that my initial view was probably mistaken. For example, if I argued that we ought to expect superintelligent AI to be phenomenally conscious, the critic ought to convince me that I was probably mistaken to assert that. Or if I argue that indifference is a type of racism, the critic ought to convince me that it's probably better to restrict the idea of "racism" to more active forms of prejudice.

From an abstract point of view, how often ought I expect to be convinced by those who object to my arguments, if I were admirably open-minded and rational?

For two reasons, the number should be below 50%:

1. For most of the issues I write about, I have given the matter more thought than most (not all!) of those who disagree with me. Mostly I write about issues that I have been considering for a long time or that are closely related to issues I've been considering for a long time.

2. Some (most?) philosophical disputes are such that even ideally good reasoners, fully informed of the relevant evidence, might persistently disagree without thereby being irrational. People might reasonably have different starting points or foundational assumptions that justify persisting disagreement.

Still, even taking 1 and 2 together, it seems that it should not be a rarity for a critic to raise an interesting, novel objection that I hadn't previously considered and which ought to persuade me. This is clear when I consider other philosophers: Often they get objections (sometimes from me) which, in my judgment, nicely illuminate what is incorrect in their views, and which should rationally lead them to change their views -- if only they weren't so defensively set upon rebutting all critiques! I doubt I am a much better philosopher than they are, wise enough to have wholly excellent opinions; so I must sometimes hear criticisms that ought to cause me to relinquish my views.

Let me venture to put some numbers on this.

Let's begin by excluding positions on which I have published at least one full-length paper. For those positions, considerations 1 and 2 plausibly suggest rational steadfastness in the large majority of cases.

A more revealing target is half-baked or three-quarters-baked positions on contentious issues: anything from a position I have expressed verbally, after a bit of thought, in a seminar or informal discussion, up to approximately a blog post, if the issue is fairly new to me.

Suppose that about 20% of the time what I say is off-base in a way that should be discoverable to me if I gave it more thought, in a reasonably open-minded, even-handed way. Now if I'm defending that off-base position in dialogue with someone substantially more expert than I, or with a couple of peers, or with a somewhat larger group of people who are less expert than I but still thoughtful and informed, maybe I should expect that about half to 3/4 of the time I'll hear an objection that ought to move me. Multiplying and rounding, let's say that about 1/8 of the time, when I put forward a half- or three-quarters-baked idea to some interlocutors, I ought to hear an objection that makes me think, whoops, I guess I'm probably mistaken!
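(For the numerically inclined, here's a minimal sketch of that multiplication; the 20% and half-to-three-quarters figures are just my rough guesses above, not data, and the variable names are only illustrative.)

    # Rough arithmetic behind the ~1/8 estimate (all figures are guesses, not data)
    p_off_base = 0.20                # chance a half-baked claim of mine is discoverably off-base
    p_hear_objection = (0.50, 0.75)  # chance an interlocutor voices an objection that should move me

    low = p_off_base * p_hear_objection[0]   # 0.10
    high = p_off_base * p_hear_objection[1]  # 0.15
    print(f"Expected 'whoops, I'm probably mistaken' rate: {low:.2f} to {high:.2f} (roughly 1/8)")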

I hope this isn't too horrible an estimate, at least for a mature philosopher. For someone still maturing as a philosopher, the estimate should presumably be higher -- maybe 1/4. The estimate should similarly be higher if the half- or three-quarters-baked idea is a critique of someone more expert than you, concerning the topic of their philosophical expertise (e.g., pushing back against a Kant expert's interpretation of a passage of Kant that you're interested in).

Here then are two opposed epistemic vices: being too deferential or being too stubborn. The cartoon of excessive deferentiality would be the person who instantly withdraws in the face of criticism, too quickly allowing that they are probably mistaken. Students are sometimes like this, but it's hard for a really deferential person to make it far as a professional philosopher in U.S. academic culture. The cartoon of excessive stubbornness is the person who is always ready to cook up some post-hoc rationalization of whatever half-baked position happens to come out of their mouth, always fighting back, never yielding, never seeing any merit in any criticisms of their views, however wrong their views plainly are. This is perhaps the more common vice in professional philosophy in the U.S., though of course no one is quite as bad as the cartoon.

Here's a third, more subtle epistemic vice: always giving the same amount of deference. Cartoon version: For any criticism you hear, you think there's 20% truth in it (so you're partly deferential) but you never think there's more than 20% truth in it (so you're mostly stubborn). This is what my inner skeptic was worried about at the beginning of this post. I might be too close to this cartoon, always a little deferential but mostly stubborn, without sufficient sensitivity to the quality of the particular criticism being directed at me.

We can now construct a rationalization-o-meter. Stubborn rationalization, in a mature philosopher, is revealed by not thinking your critics are right, and you are wrong, at least 1/8 of the time, when you're putting forward half- to three-quarters-baked ideas. If you stand firm in 15 out of 16 cases, then you're either unusually wise in your half-baked thoughts, or you're at .5 on the rationalization-o-meter (50% of the time that you should yield you offer post-hoc rationalizations instead). If you're still maturing or if you're critiquing an expert on their own turf, the meter should read correspondingly higher, e.g., with a normative target of thinking you were demonstrably off-base 1/4 or even half the time.
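(Here's a minimal sketch of how that meter reading works out, treating the meter as the proportional shortfall from the normative target; the function name and exact formula are just one way to operationalize the idea, nothing official.)

    # Minimal sketch: the meter as proportional shortfall from the normative target
    def rationalization_meter(times_yielded, total_cases, target_rate=1/8):
        """Fraction of the occasions on which one should have yielded but rationalized instead."""
        observed_rate = times_yielded / total_cases
        return max(0.0, 1 - observed_rate / target_rate)

    # Standing firm in 15 of 16 cases means yielding only once:
    print(rationalization_meter(times_yielded=1, total_cases=16))  # 0.5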

Insensitivity is revealed by having too little variation in how much truth you find in critics' remarks. I'd try to build an insensitivity-o-meter, but I'm sure you all will raise somewhat legitimate but non-decisive concerns against it.

[image modified from source]

Monday, January 23, 2017

Reminder: Philosophical Short Fiction Contest Deadline Feb 1

Reminder: We are inviting submissions for the short story competition “Philosophy Through Fiction”, organized by Helen De Cruz (Oxford Brookes University), with editorial board members Eric Schwitzgebel (UC Riverside), Meghan Sullivan (University of Notre Dame), and Mark Silcox (University of Central Oklahoma). The winner of the competition will receive a cash prize of US$500 (funded by the Berry Fund of the APA) and their story will be published in Sci Phi Journal.

Full call here.

Monday, January 16, 2017

AI Consciousness: A Reply to Schwitzgebel

Guest post by Susan Schneider

If AI outsmarts us, I hope it's conscious. It might help with the horrifying control problem – the problem of how to control superintelligent AI (SAI), given that SAI would be vastly smarter than us and could rewrite its own code. Just as some humans respect nonhuman animals because animals feel, so too, conscious SAI might respect us because they see within us the light of conscious experience.

So, will an SAI (or even a less intelligent AI) be conscious? In a recent TED talk, Nautilus and Huffington Post pieces, and some academic articles (all at my website), I've been urging that this is an important open question.

I love Schwitzgebel's reply because he sketches the best possible scenario for AI consciousness: noting that conscious states tend to be associated with slow, deliberative reasoning about novel situations in humans, he suggests that SAI may endlessly invent novel tasks – e.g., perhaps they posit ever more challenging mathematical proofs, or engage in an intellectual arms race with competing SAIs. So SAIs could still engage in reasoning about novel situations, and thereby be conscious.

Indeed, perhaps SAI will deliberately engineer heightened conscious experience in itself, or, in an instinct to parent, create AI mindchildren that are conscious.

Schwitzgebel gives further reason for hope: "...unity of organization in a complex system plausibly requires some high-level self-representation or broad systemic information sharing." He also writes: "Otherwise, it's unlikely that such a system could act coherently over the long term. Its left hand wouldn't know what its right hand is doing."

Both of us agree that leading scientific approaches to consciousness correlate consciousness with novel learning and slow, deliberative focus, and that these approaches also associate consciousness with some sort of broad information sharing from a central system or global workspace (see Ch. 2 of my book The Language of Thought: A New Philosophical Direction, where I mine Baars' Global Workspace Theory for a computational approach to LOT's central system).

Maybe it is just that I'm too despondent since Princess Leia died. But here are a few reasons why I still see the glass as half empty:

a. Eric's points assume that reasoning about novel situations, and centralized, deliberative thinking more generally, will be implemented in SAI in the same way they are in humans – i.e., in a way that involves conscious experience. But the space of possible minds is vast: There could be other architectural ways to get novel reasoning, central control, etc. that do not involve consciousness or a global workspace. Indeed, if we merely consider biological life on Earth we see intelligences radically unlike us (e.g., slime molds, octopuses); there will likely be radically different cognitive architectures in the age of AGI/superintelligence.

b. SAI may not have a centralized architecture in any case. A centralized architecture is a place where highly processed information comes together from the various sensory modalities (including association areas). Consider the octopus, which apparently has more neurons in its arms than in its brain. The arms can carry out activity without the brain; these activities do not need to be coordinated by a central controller or global workspace in the brain proper. Maybe a creature already exists, elsewhere in the universe, that has even less central control than the octopus.

Indeed, coordinated activity doesn't require that a brain region or brain process be a place where it all comes together, although it helps. There are all kinds of highly coordinated group activities on Earth, for instance (the internet, the stock market). And if you ask me, there are human bodies that are now led by coordinated conglomerates without a central controller. Here, I am thinking of split-brain patients, who engage in coordinated activity (i.e., the right and left limbs seem to others to be coordinated). But the brain has been split by severing the corpus callosum, and plausibly, there are two subjects of experience there. The coordination is so convincing that even the patient's spouse doesn't realize there are two subjects there. It takes highly contrived laboratory tests to determine that the two hemispheres are separate conscious beings. How could this be? Each hemisphere examines the activity of the other hemisphere (the right hemisphere observes the behavior of the limb it doesn't control, etc.). And only one hemisphere controls the mouth.

c. But assume the SAI or AGI has a cognitive architecture similar to ours; in particular, assume it has an integrated central system or global workspace (as in Baars' Global Workspace Theory). I still think consciousness is an open question here. The problem is that only some implementations of a central system (or global workspace) may be conscious, while others may not be. Highly integrated, centralized information processing may be necessary, but not sufficient. For instance, it may be that the very properties of neurons that enable consciousness, C1-C3, say, are not ones that AI programs need to reproduce to get AI systems that do the needed work. Perhaps AI programmers can get sophisticated information processing without needing to go as far as building systems that instantiate C1-C3. Or perhaps a self-improving AI may not bother to keep consciousness in its architecture; or, lacking consciousness, it may not bother to engineer it in, as its final and instrumental goals may not require it. And who knows what their final goals will be; none of the instrumental goals Bostrom and others identify require consciousness (goal content integrity, cognitive enhancement, etc.).

Objection (Henry Shevlin and others): am I denying that it is nomologically possible to create a copy of a human brain, in silicon or some other substance, that precisely mimics the causal workings of the brain, including consciousness?

I don't deny this. I think that if you copy the precise causal workings of cells in a different medium you could get consciousness. The problem is that it may not be technologically feasible to do so. (An aside: for those who care about the nature of properties, I reject pure categorical properties; I have a two-sided view, following Heil and Martin. Categoricity and dispositionality are just different ways of viewing the same underlying property—two different modes of presentation, if you will. So consciousness properties that have all and only the same dispositions are the same type of property. You and your dispositional duplicate can't differ in your categorical properties then. Zombies aren't possible.)

It seems nomologically possible that an advanced civilization could build a gold sphere the size of Venus. What is the probability this will ever happen, though? This depends upon economics and sociology – a civilization would need to have a practical incentive to do this. I bet it will never happen.

AI is currently being built to do specific tasks better than us. This is the goal, not reproducing consciousness in machines. It may be that the substrate used to build AI is not a substrate that instantiates consciousness easily. Engineering consciousness in may be too expensive and time consuming. Like building the Venus-sized gold sphere. Indeed, given the ethical problems with creating sentient beings and then having them work for us, AI programs may aim to build systems that aren't conscious.

A response here is that once you get a sophisticated information processor, consciousness inevitably arises. Three things seem to fuel this view: (1) Tononi's integrated information theory (IIT). But it seems to have counterexamples (see Scott Aaronson's blog). (2) Panpsychism/panprotopsychism. Even if one of these views is correct, the issue of whether a given AI is conscious is about whether the AI in question has the kind of conscious experience macroscopic subjects of experience (persons, selves, nonhuman animals) have. Merely knowing whether panpsychism or panprotopsychism is true does not answer this. We need to know which structural relations between particles lead to macroexperience. (3) Neural replacement cases, i.e., thought experiments in which you are asked to envision replacing parts of your brain (at time t1) with silicon chips that function just like neurons, so that in the end (t2) your brain is made of silicon. You are then asked: intuitively, are you still conscious? Do you think the quality of your consciousness would change? These cases only go so far. The intuition is plausible that from t1 to t2, at no point would you lose consciousness or have your consciousness diminished (see Chalmers, Lowe, and Plantinga for discussion of such thought experiments). This is because a dispositional duplicate of your brain is created, from t1 to t2. If the chips are dispositional duplicates of neurons, sure, I think the duplicate would be conscious. (I'm not sure this would be a situation in which you survived, though; see my NYT op-ed on uploading.) But why would an AI company build such a system from scratch, to clean your home, be a romantic partner, advise a president, etc.?

Again, it is not clear, currently, that just by creating a fast, efficient program ("IP properties") we have also copied the very same properties that give rise to consciousness in humans ("C properties"). It may require further work to get C properties, and in different substrates it may be hard, far more difficult than building a biological system from scratch. Like creating a gold sphere the size of Venus.

Sunday, January 08, 2017

Against Charity in the History of Philosophy

Peter Adamson, host of History of Philosophy Without Any Gaps, recently posted twenty "Rules for the History of Philosophy". Mostly, they are terrific rules. I want to quibble with one.

Like almost every historian of philosophy I know, Adamson recommends that we be "charitable" to the text. Here's how he puts it in "Rule 2: Respect the text":

This is my version of what is sometimes called the "principle of charity." A minimal version of this rule is that we should assume, in the absence of fairly strong reasons for doubt, that the philosophical texts we are reading make sense.... [It] seems obvious (to me at least) that useful history of philosophy doesn't involve looking for inconsistencies and mistakes, but rather trying one's best to get a coherent and interesting line of argument out of the text. This is, of course, not to say that historical figures never contradicted themselves, made errors, and the like, but our interpretations should seek to avoid imputing such slips to them unless we have tried hard and failed to find a way of resolving the apparent slip.

At first pass, it seems a good idea to avoid imputing contradictions and errors, and to seek a coherent, sensible interpretation of historical texts "unless we have tried hard and failed to find a way of resolving the apparent slip". This is how, it seems, to best "respect the text".

To see why I think charity isn't as good an idea as it seems, let me first reveal my main reason for reading history of philosophy: It's to gain a perspective, through the lens of distance, on my own philosophical views and presuppositions, and on the philosophical attitudes and presuppositions of 21st century Anglophone philosophy generally. Twenty-first century Anglophone philosophy tends to assume that the world is wholly material (with the exception of religious dualists and near cousins of materialists, like property dualists). I'm inclined to accept the majority's materialism. Reading the history of philosophy helpfully reminds me that a wide range of other views have been taken seriously over time. Similarly, 21st century Anglophone philosophy tends to favor a certain sort of liberal ethics, with an emphasis on individual rights and comparatively little deference to traditional rules and social roles -- and I tend to favor such an ethics too. But it's good to be vividly aware that wonderful thinkers have often had very different moral opinions. Reading culturally distant texts reminds me that I am a creature of my era, with views that have been shaped by contingent social factors.

Of course, others might read history of philosophy with very different aims, which is fine.

Question: If this is my aim in reading history of philosophy, what is the most counterproductive thing I could do when confronting a historical text?

Answer: Interpret the author as endorsing a view that is familiar, "sensible", and similar to my own and my colleagues'.

Historical texts, like all philosophical texts -- but more so, given our linguistic and cultural distance -- tend to be difficult and ambiguous. Therefore, they will admit of multiple interpretations. Suppose, then, that there's a text admitting of four possible interpretations: A, B, C, and D, where Interpretation A is the least challenging, least weird, and most sensible, and Interpretation D is the most challenging, weirdest, and least sensible. A simple application of the principle of charity seems to recommend that we favor the sensible, pedestrian Interpretation A. In fact, however, weird and wild Interpretation D would challenge our presuppositions more deeply and give us a more helpfully distant perspective. This is one reason to favor Interpretation D. Call this the Principle of Anti-Charity.

Admittedly, this way of defending Anti-Charity might seem noxiously instrumentalist. What about historical accuracy? Don't we want the interpretation that's most likely to be true?

Bracketing post-modern views that reject truth in textual interpretation, I have four responses to that concern:

1. Being Anti-Charitable doesn't mean that anything goes. You still want to respect the surface of the text. If the author says "P", you don't want to attribute the view not-P. In fact, it is the more "charitable" views that are likely to take the author's claims other than at face value: "The author says P, but really a charitable, sensible interpretation is that the author really meant P-prime". In one way, it is actually more respectful to the texts not to be too charitable, and to interpret the text superficially at face value. After all, P is what the author literally said.

2. What seems "coherent" and "sensible" is culturally variable. You might reject excessive charitableness, while still wanting to limit allowable interpretations to one among several sensible and coherent ones. But this might already be too limiting. It might not seem "coherent" to us to embrace a contradiction, but some philosophers in some traditions seem happy to accept bald contradictions. It might not seem "sensible" to think that the world is nothing but a flux of ideas, such that the existence of rocks depends entirely upon the states of immaterial spirits. So if there's any ambiguity, you might hope to tame views that seem metaphysically idealist, thereby giving those authors a more sensible, reasonable seeming view. But this might be leading you away from rather than toward interpretative accuracy.

3. Philosophy is hard and philosophers are stupid. The human mind is not well-designed for figuring out philosophical truths. Timeless philosophical puzzles tend to kick our collective asses. Sadly, this is going to be true of your favorite philosopher too. The odds are good that this philosopher, being a flawed human like you and me, made mistakes, fell into contradictions, changed opinions, and failed to see what seem to be obvious consequences and counterexamples. Respecting the text and respecting the person means, in part, not trying too hard to smooth this stuff away. The warts are part of the loveliness. They are also a tonic against excessive hero worship and a reminder of your own likely warts and failings.

4. Some authors might not even want to be interpreted as having a coherent, stable view. I have recently argued that this is the case for the ancient Chinese philosopher Zhuangzi. Let's not fetishize stable coherence. There are lots of reasons to write philosophy. Some philosophers might not care if it all fits together. Here, attempting "charitably" to stitch together a coherent picture might be a failure to respect the aims and intentions implicit in the text.

Three cheers for the weird and "crazy", the naked text, not dressed in sensible 21st century garb!

-----------------------------------------------

Related post: In Defense of Uncharitable and Superficial History of Philosophy (Aug 17, 2012)

(HT: Sandy Goldberg for discussion and suggestion to turn it into a blog post)

[image source]

Sunday, January 01, 2017

Writings of 2016, and Why I Love Philosophy

It's a tradition for me now, posting a retrospect of the past year's writings on New Year's Day. (Here are the retrospects of 2012, 2013, 2014, and 2015.)

Two landmarks: my first full-length published essay on the sociology of philosophy ("Women in philosophy", with Carolyn Dicey Jennings), and the first foreign-language translations of my science fiction ("The Dauphin's Metaphysics" into Chinese and Hungarian).

Recently, I've been thinking about the value of doing philosophy. Obviously, I love reading, writing, and discussing philosophy, on a wide range of topics -- hence all the publications, the blog, the travel, and so forth. Only love could sustain that. But do I love it only in the way that I might love a videogame -- as a challenging, pleasurable activity, but not something worthwhile? No, I do hope that in doing philosophy I am doing something worthwhile.

But what makes philosophy worthwhile?

One common view is that studying philosophy makes you wiser or more ethical. Maybe this is true, in some instances. But my own work provides reasons for doubt: With Joshua Rust, I've found that ethicists and non-ethicist philosophers behave pretty much the same as professors who study other topics. With Fiery Cushman, I've found evidence that philosophers are just as subject to irrational order effects and framing effects in thinking about moral scenarios, even scenarios on which they claim expertise. With Jon Ellis, I've argued that there's good reason to think that philosophical and moral thought may be especially fertile for nonconscious rationalization, including among professors of philosophy.

Philosophy might still be instrumentally worthwhile in various ways: Philosophers might create conceptual frameworks that are useful for the sciences, and they might helpfully challenge scientists' presuppositions. It might be good to have philosophy professors around so that students can improve their argumentative and writing skills by taking courses with them. Public philosophers might contribute usefully to political and cultural dialogue. But none of this seems to be the heart of the matter. Nor is it clear that we've made great progress in answering the timeless questions of the discipline. (I do think we've made some progress, especially in carving out the logical space of options.)

Here's what I would emphasize instead: Philosophy is an intrinsically worthwhile activity with no need of further excuse. It is simply one of the most glorious, awesome facts about our planet that there are bags of mostly-water that can step back from ordinary activity and reflect in a serious way about the big picture, about what they are, and why, and about what really has value, and about the nature of the cosmos, and about the very activity of philosophical reflection itself. Moreover, it is one of the most glorious, awesome facts about our society that there is a thriving academic discipline that encourages people to do exactly that.

This justification of philosophy does not depend on any downstream effects: Maybe once you stop thinking about philosophy, you act just the same as you would have otherwise acted. Maybe you gain no real wisdom of any sort. Maybe you learn nothing useful at all. Even so, for those moments that you are thinking hard about big philosophical issues, you are participating in something that makes life on Earth amazing. You are a piece of that.

So yes, I want to be a piece of that too. Welcome to 2017. Come love philosophy with me.

-----------------------------------

Full-length non-fiction essays appearing in print in 2016:

    “The behavior of ethicists” (with Joshua Rust), in J. Sytsma and W. Buckwalter, eds., A Companion to Experimental Philosophy (Wiley-Blackwell).
Full-length non-fiction finished and forthcoming:
Shorter non-fiction:
Editing work:
    Oneness in philosophy, religion, and psychology (with P.J. Ivanhoe, O. Flanagan, R. Harrison, and H. Sarkissian), Columbia University Press (forthcoming).
Non-fiction in draft and circulating:
Science fiction stories:
    "The Dauphin's metaphysics" (orig. published in Unlikely Story, 2015).
      - translated into Hungarian for Galaktika, issue 316.
      - translated into Chinese for Science Fiction World, issue 367.
Some favorite blog posts:
Selected interviews:

[image modified from here]