Thursday, August 01, 2019

Industrial-Grade Realism about Beliefs That P

I favor a dispositional approach to belief according to which believing something resembles having a personality trait. You believe, or you have a personality trait, to the extent you have a dispositional profile that approximates a certain ideal. To be extraverted, for example, is to be disposed, in general, to enjoy parties, to like meeting new people, to be talkative in social contexts, etc. Similarly, to believe that women and men are equally academically intelligent is to be disposed, in general, to affirm that it is so, to expect academically intelligent remarks from women no less than men, to be as ready to hire a woman as a man for a job requiring academic intelligence, etc. To believe that there is beer in the fridge is to be disposed, in general, to act and react beer-in-the-fridge-ishly (going to the fridge if one wants a beer, etc.).
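To make the profile-matching structure concrete, here is a minimal toy sketch (the dispositions, the numbers, and the matching rule are all invented for illustration; nothing here is a claim about implementation): belief comes in degrees, as closeness of an agent's dispositional profile to the ideal.

```python
# Toy model of dispositional belief: believing that P is a matter of
# degree, measured by how closely an agent's dispositional profile
# approximates an ideal profile for P. All numbers are invented.

IDEAL_BEER_IN_FRIDGE = {
    "affirms 'there is beer in the fridge' when asked": 1.0,
    "walks to the fridge when wanting a beer":          1.0,
    "feels no surprise on seeing beer in the fridge":   1.0,
    "offers guests a beer from the fridge":             1.0,
}

def degree_of_belief(agent_profile, ideal_profile):
    """Mean closeness of the agent's dispositions to the ideal (0 to 1)."""
    diffs = [abs(ideal_profile[d] - agent_profile.get(d, 0.0))
             for d in ideal_profile]
    return 1.0 - sum(diffs) / len(diffs)

# An agent who mostly, but not perfectly, acts and reacts
# beer-in-the-fridge-ishly:
agent = {
    "affirms 'there is beer in the fridge' when asked": 1.0,
    "walks to the fridge when wanting a beer":          0.9,
    "feels no surprise on seeing beer in the fridge":   0.8,
    "offers guests a beer from the fridge":             0.5,
}

print(degree_of_belief(agent, IDEAL_BEER_IN_FRIDGE))  # 0.8 -- belief to a degree
```

Notice that the sketch says nothing about how the dispositions are implemented underneath, which is exactly the point.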

(Historically, dispositional approaches are rooted in the behaviorist tradition, but my own dispositionalism focuses not just on behavioral dispositions but also phenomenal dispositions [e.g., not reacting with a feeling of surprise] and cognitive dispositions [e.g., engaging in a pattern of conscious reasoning that only makes sense on the background assumption that P is true].)

My view is a kind of soft instrumentalism: The belief that P is not some thing in the head. Rather, when we attribute a belief, we are using a shorthand that approximately captures, in Daniel Dennett's phrase, a "real pattern" in a complex landscape of potential actions and reactions. Believing that P no more requires a real stored representation with the content "P" than being an extravert requires a switch flipped to "E" in your personality-settings box.

There's an alternative view that I will label industrial-grade realism (again adapting a phrase from Dennett). According to industrial-grade realism, believing that P normally requires that a representation with the content "P" be stored somewhere in the functional architecture of the mind, ready to be activated and deployed when one does belief-that-P-ish things like, in our examples, criticizing a colleague's apparent sexism or strolling over to the fridge for a beer. Industrial-grade realism seems to undergird Jake Quilty-Dunn and Eric Mandelbaum's recent critique of my dispositionalist account, and was perhaps most prominently advocated by Jerry Fodor (e.g., in his 1987 book).

To my ear, the following four theses seem to be implicit in the industrial-grade realism of Fodor, Quilty-Dunn, and Mandelbaum. If not, I'd be interested to see textual evidence that they reject them.

Presence. In normal (non-"tacit") cases, belief that P requires that a representation with the content P be present somewhere in the mind.

Discreteness. In normal (non-"marginal") cases, a representation P will be either discretely present in or discretely absent from a cognitive system or subsystem.

Kinematics. Rational actions arise from the causal interaction of beliefs that P and desires that Q, in virtue of their specific contents P and Q.

Specificity. Rational action arises from the activation or retrieval of some specific set of beliefs and desires P1, ..., Pn and Q1, ..., Qn, and not from possibly closely logically related beliefs and desires P'1, ..., P'm and Q'1, ..., Q'm.
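To make vivid what the four theses jointly commit one to, here is a deliberately cartoonish sketch of the architecture they describe. The class and its contents are my invention for illustration, not anything Fodor, Quilty-Dunn, or Mandelbaum endorse in this form:

```python
# Cartoon of the industrial-grade architecture: beliefs and desires are
# discretely stored, content-specific representations (Presence,
# Discreteness), and action arises from retrieving and combining the
# specific ones, in virtue of their contents (Kinematics, Specificity).

class Mind:
    def __init__(self):
        self.belief_box = set()   # each content discretely present or absent
        self.desire_box = set()

    def act(self):
        # Only this specific belief/desire pair -- not any logically
        # related ones -- produces the action, in virtue of its content.
        if ("beer is in the fridge" in self.belief_box
                and "drink beer" in self.desire_box):
            return "walk to the fridge"
        return None

mind = Mind()
mind.belief_box.add("beer is in the fridge")
mind.desire_box.add("drink beer")
print(mind.act())  # "walk to the fridge"

# "There is a cold beverage in the kitchen" would NOT trigger the
# action, even though it is closely logically related (Specificity).
```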

[Beliefs that P, hard at work]

These four theses combined constitute a commitment to a very specific type of cognitive architecture. This architecture seems to me to be rather in keeping with the old-fashioned cognitive science and computer science of the 1970s and 1980s -- so it's easy to see why Fodor would have been attracted to it. It fits less easily, however, with deep learning, statistical approaches to memory of visual ensembles, and other recent approaches according to which cognition proceeds by means of processes that are highly complex, not especially language-like, and don't employ representational structures that map neatly onto the types of belief contents that we normally attribute to people (like "there is beer in the fridge").

Let me pose, then, this trilemma for industrial-grade realists:

(1.) Commit to the four theses and the seemingly old-fashioned cognitive architecture. Downside: This would be a risky empirical bet against some powerful recent trends.

(2.) Allow the possibility that the underlying representations are very different in structure and content from "women and men are intellectually equal" and "there's beer in the fridge". Downside: It is no longer clear what the causal story, about which realism is supposed to be true, would be. Heading to the fridge for beer because one believes there is beer in the fridge is now no longer explained by accessing a representation with the content "there is beer in the fridge". Furthermore, closely related views might also require major revision. Having specific propositional contents that can be verbally expressed and shared among people is part of the picture behind, for example, Fodor's anti-holism (see also my critique of Elizabeth Camp). Specific P and not-P contents also seem central to Mandelbaum's contradictory-belief account of implicit bias.

(3.) Thread the needle: Go for something weaker and less architecturally commissive than the four theses, yet strong enough to be a substantive empirical commitment to the real causal power of representations of P when we believe that P. Downsides: As far as I can tell, both of the above.

The dispositional approach to attitudes is superficial in a certain respect: It doesn't commit to any underlying architectural implementation. As long as you have the appropriate dispositional patterns of action and reaction, you believe, whatever unintuitive haywire architecture lies beneath. This superficiality is a virtue, not a vice. In cognitive science, it leaves open questions that should remain open about the underlying architecture, and it keeps the focus on what we philosophers and ordinary belief-ascribers do and should care about in thinking about belief: how we act in and react to the world.

[image source]

26 comments:

James of Seattle said...

A recent paper by Paul Cisek ("Resynthesizing behavior through phylogenetic refinement") introduced to me the concept of "pragmatic representation", in which, instead of representing "there's a beer in the fridge", the representation is more like "given a desire for beer, walking to the fridge, opening the door, etc., would be a good thing to do". So again, a representation, but not so much of objects as of actions. Would this be useful in the context you write about here?
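To gesture at the contrast in code (a toy sketch; the structures and names are invented for illustration): a pragmatic representation stores action-guiding mappings rather than facts about objects.

```python
# Toy contrast: an "objective" representation stores a fact; a
# "pragmatic" representation stores what, given a desire, would be
# a good thing to do. Names invented for illustration.

objective_rep = {"beer location": "fridge"}          # represents an object/fact

pragmatic_rep = {                                    # represents actions
    "desire: beer": ["walk to fridge", "open door", "grab bottle"],
}

def act(desire):
    return pragmatic_rep.get(desire, ["do nothing"])

print(act("desire: beer"))   # ['walk to fridge', 'open door', 'grab bottle']
```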

*

Eric Schwitzgebel said...

Thanks for the comment, James! The idea of pragmatic representations in general seems reasonable; but I would worry about adopting an architectural commitment to one with that sort of folksy content.

D said...

I've been thinking about this with respect to natural-language-generation AI tools like GPT-2. They've recently gotten a lot better, to the point where they make sense for two or three sentences in a row, and manage to more-or-less stay on topic. If you input a list of questions and answers and end with a question, the first thing it generates is often the correct answer.
However, it will also often contradict itself, because this is all based on statistics and it doesn't have any mechanism for checking that it is maintaining consistency. All its "beliefs" as expressed in what it says are implicit in the probabilities of generating a particular word, and depending on how the question is asked, its expressed "beliefs" are different. So does the system even have beliefs? And if not, maybe I don't either, and I just have a propensity to reply in a particular way if the question is asked in the right way.
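One can see this concretely by asking GPT-2 for the probabilities it assigns to rival answers. A minimal sketch using the Hugging Face transformers library (the prompt and candidate answers are invented for illustration):

```python
# Compare the probabilities GPT-2 assigns to rival one-word answers.
# Its "beliefs" show up only as relative generation probabilities.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "Q: What is the capital of France?\nA:"
input_ids = tokenizer.encode(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(input_ids).logits        # (1, seq_len, vocab_size)
probs = torch.softmax(logits[0, -1], dim=-1)  # next-token distribution

for answer in [" Paris", " London"]:
    token_id = tokenizer.encode(answer)[0]
    print(repr(answer), float(probs[token_id]))

# If " Paris" gets the higher probability, does the model thereby
# "believe" Paris is the capital? Rephrase the question and the
# probabilities -- the expressed "beliefs" -- shift.
```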

jed said...

I like this approach and will read the paper (from 2002!). I'm pretty sure it is compatible with the cognitive architecture we've started to see emerging from deep learning experiments.

The consistency issues raised by D certainly need to be addressed within this emerging cognitive architecture. Happily, we have adequate evidence that they can be, without reintroducing anything equivalent to "propositions". See for example the recent success of deep learning approaches to the "Winograd schema" cases -- long-distance, purely semantic consistency requirements between parts of a sentence (or paragraph, or...)

But as a non-philosopher I wonder how dispositional accounts can handle the existing philosophical burdens normally loaded onto beliefs (or more generally propositions). Personally I mostly don't find the current accounts credible so would very much like to see alternatives built using this approach.

I think very broadly rejecting accounts dependent on propositions *as cognitive content* is the right way to go -- adopting instead a dispositional framework. (Has this been worked out as a general alternative? If so, references would be helpful.) My sense is that this would be a tectonic change in a lot of philosophical discourse, so I expect adoption will be slow.

Eric Mandelbaum said...

Hi Eric,

Just writing to say that I think this is a fair and charitable characterization of our position (I say "our" because I think Jake agrees with me but of course all errors below should be attributable to me alone).

I have some quibbles, but really the quibbles seem to be on points that are orthogonal to the core of the debate. The quibbles: I'd tweak the Kinematics and Specificity requirements. I think rational actions can arise in lots of different ways, and anyway I think I prefer the distinction between action and behavior over the rational vs. irrational action distinction. My type of psychofunctionalism isn't even necessarily committed to there being desires--I think beliefs themselves can drive action (maybe we want to call this action rationalizing instead of rational action, or action that the computational mind can see as rational even though as epistemologists we might deem it wanting in some way or other). This is all a much longer story (one that I'll have a book out on any year now...) but the main beats are easily hummable: if we can recreate chains of behavior as consequences of transitions between contentful mental states, we can get an idea of rational action off the ground, even if the actions themselves wouldn't count as broadly rational by other criteria. We can then distinguish behavior from action in terms of the need to posit contentful states, or maybe even computational states. But again this seems like a quibble about things that aren't at the core of the issue of belief. Questions of whether desires exist or whether other motivational states would suffice to interact with beliefs to motivate behavior; or what counts as a rational action vs. irrational action vs. mere behavior; or how we categorize associative trains of thought which affect behavior/action [are they computational or subcomputational?]; etc. -- these are all interesting, and ones I have a position on, but I don't think the peculiarities of our psychofunctional theory of belief matter for your characterization of Industrial-Strength Realism.

(Second half of the post coming in a second)

Eric Mandelbaum said...


As for your options, I'm inclined to take option 1. My student Nic Porot and I are writing a paper now about the breadth and rebirth of the LoT--I think one sees it everywhere nowadays! Nic just wrote a fascinating dissertation arguing for the existence of LoT in a wide range of animal minds (including insects!). Extending the arguments for LoT to creatures without a natural language is really an interesting, fecund new direction to push the debate. Going in the other direction, the rise of Bayesianism is coincident with the rise of the Probabilistic LoT. Since Bayesianism is anything but old-fashioned, I think we are doing OK keeping up with current trends. If anything, I'd think (and Nic and I argue) that we now have even more powerful reason to believe in LoT.

That's not to say I'm numb to the examples Eric gave. There is definitely a sense in which ensemble perception and deep learning sit poorly with the LoT. But this doesn't keep me up at night. Ensemble perception is, for me, part of a general modular architecture which itself never sat well with LoT per se, but that's OK, as LoT was a thesis about cognition, not perception. (And this should keep us up at night even less than it should've kept Fodor up, as Jake and I have both separately worked on understanding how the interface of perception and cognition might work for an LoT story [I have in mind Jake's ideas about how discrete and iconic elements are outputted by perception [and his/EJ Green's work on object files in particular] and work on conceptualized perception [as in my PPR paper]]). Deep learning is a different story--that seems more deeply at odds with the LoT. But I don't think deep learning is a model of our mind at all, so this doesn't worry me much (I don't think Hinton even thinks of it as a possible model of the human mind anymore). Of course that's not an argument, just a statement of faith. Nonetheless it describes my faith pretty well!

(And perhaps I should add that Jake has pointed out that it's not totally clear that deep learning is incompatible with LoT as deep learning models make it hard to know exactly what the system ends up representing).

Big picture: I think everything Eric said is totally fair and I appreciate the faithful characterization.

e

Anonymous said...

You might be interested in this talk by Andy Clark in response to some papers given in his honor:
https://www.youtube.com/watch?v=Iw4YxgtqCpo
-dmf

Devin Curry said...

Erics,

Thanks for this interesting exchange. I wonder what you each make of a fourth option for industrial-strength realists: distinguishing between beliefs qua objects of folk psychology and beliefs qua (present, discrete, kinematic, specific) psychofunctional cogs in cognitive systems. (I briefly discuss this distinction in my "Interpretivism and norms" in Phil Studies, and fully develop it in a couple of articles in prep.) Making this distinction allows beliefs to exist as real patterns, no matter the underlying cognitive architecture, without prematurely ruling out the hypothesis that a new-fangled LoT could also countenance beliefs as industrial-grade real. The distinction is also compatible with lots of different stories about how psychofunctional beliefs (if they exist) underlie folk psychological beliefs: there could be a one-to-one match, or some of our folk-beliefs might be generated by psychofunctional-beliefs in the LoT whereas others are generated by visual ensembles (or whatever). Such a mixed story (which, for what little it's worth, follows my hunch) might help explain why Mandelbaum's cognitive architecture doesn't need desires (even though desires exist qua real patterns), as well as why there's a gap between Mandelbaum's rationality and the epistemologist's rationality (the latter of which is presumably pitched at a folk psychological level of explanation).

In any case, if my distinction holds water, then the Erics may have different empirical bets about cognitive architecture, but need have no real disagreement about the nature of belief. Schwitzgebel need not be superficial about all possible belief-phenomena, and Mandelbaum need not be an industrial-grade realist about all belief-phenomena.

Callan said...

Seems like strawmanning to me, but not maliciously. Rather out of some prior belief.

To me it seems like you are taking it that the 'underlying representations' are so different in structure that they are not in the head, Eric?

Also to me it just seems to be a false dichotomy that precludes a third option, albeit a very nihilistic one to deal with.

To avoid getting too nihilistic: currently the whole 'underlying representation' idea reminds me of the idea of homunculi in the head. I actually think if you take it to a cellular level the homunculus idea isn't too out there. If you keep reducing each supposed homunculus down, eventually each one is itself a cell - and cells certainly exist and they certainly do things: brain cells and body cells send chemical triggers to each other.

So what about a third option of breaking the belief in beer down to the very cells that send triggers to other cells, which in turn keep sending triggers, and more, until the person is up and opening the fridge door?

So, what's wrong with that as another option? I'm assuming it won't sit well?

David Duffy said...

I was thinking about this post walking down the side of the house, when I was struck by the possibility I may not have closed the back door because I was so distracted. I may have a disposition to be anxious that I have left something undone, but I don't think I have a disposition to check or not check that the door was closed at 8 am on Sunday the 4th. Further, I may only now have a memory at t0+30 of remembering to consult my immediate memory of the door at t0+5 but not the sense impression of the actual door closing at t0. That second-hand knowledge strikes me as more consistent with a representation.

Eric Schwitzgebel said...

Thanks for all of the comments, folks! Sorry for the slow reply.

D: Yes, except having the propensity might be enough to be worth calling "belief" in a less industrial-grade sense!

Jed: I'm glad you think it's promising! The dispositional approach hasn't been worked out in detail. It has been developed more as a general structure. To start getting more specific about the framework would require more empirical detail.

Eric: Thanks for those helpful comments! I'm glad I've got your general view correct, apart from the quibbles. I agree Bayesianism is fashionable -- but I'm not so sure about the resurgence of LoT! It could be an "east pole" - "west pole" thing, to use the old cog-sci lingo.

dmf: Thanks for the link. I'll check it out.

Devin: Yes, I'm sympathetic with that. I think what you express is closer to my view than Eric M's. The superficialist approach is non-committal about the underlying structure (though I have my bets). It's completely compatible with superficialism that industrial-grade realist LoT is how beliefs are implemented. But, per superficialism, we shouldn't *define* belief that way.

Callan: I think that's a reasonable option. I don't think it's in tension with superficialism about belief. I'm an industrial-grade realist about neurons. It's only the middle level of representations that P that I'm an instrumentalist about.

David: Uncertainty is a complicated case for all models of belief, but I'd suggest this. You have about a 90% credence that you closed your door in virtue of what you would be disposed to say, how you would bet, whether you would check it if the costs were low, the level of anxiety you feel, etc. On having a memory of a memory: Memory could probably be understood representationally, but my guess is that it's not going to be a content as clean as: I remember: "At t0+5 I consulted my memory of the door". That sort of propositional content seems far too simple.

EM said...

Thanks for comments, and sorry for the slow replies too. Devin my first reaction is that I think your proposal is totally reasonable. It might seem sorta weird that Schwitzgebel and I agree with this and yet still disagree but I think it makes some sense if you look at our project in a different light. I don't care to define belief per se, I'm more interested in the question of whether there is a spiritual similarity between belief as used in folk psychology and some state or other in cognitive science. I want enough overlap that it can explain how our folk psychology works so well, and enough slippage to explain why it also fails to work. I'm not opposed to the idea that ES and I don't really disagree about the nature of belief since that lofty metaphysical question is one that I'm happy to avoid as I think it's orthogonal to the core of my project.

Devin Curry said...

Thanks, Erics.

Schwitzgebel: I think our philosophical projects are more closely related to each other than to Mandelbaum's, but, especially in light of M's last comment, I'm not sure whether our views are closer (on this specific issue).

Mandelbaum: that makes total sense, and I'm very sympathetic to your project. (And, for what it's worth, I think the ways in which you recognize slippage are a happy development relative to the views of, e.g., Fodor.)

jed said...

@Eric:
> The dispositional approach hasn't been worked out in detail. It has been developed more as a general structure. To start getting more specific about the framework would require more empirical detail.

Thanks!

I don't know if there's even a rough consensus on the meaning of "disposition". However reflecting on my own rough understanding of philosophical usage there seem to be multiple cognates in other fields:

- Rebuttable presumption in law.

- Prior distribution in Bayesian statistics (see the sketch below).

- Meta-parameters and various other initial conditions in deep learning.

I'm sure there are many more.

All of these evolve / are learned / are adapted or adopted but more slowly than the response to immediate circumstances.
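The Bayesian cognate is the easiest to display. A minimal sketch, with invented numbers, of a prior as a disposition-like state that shapes each response and is itself revised, but only gradually:

```python
# A Beta prior as a disposition-like state: it shapes the response to
# each observation and is itself revised, but more slowly than the
# response to immediate circumstances. All numbers are invented.

alpha, beta = 8.0, 2.0   # prior "disposition": expect success ~80% of the time

def expect_success():
    return alpha / (alpha + beta)   # current expectation

def observe(success):
    global alpha, beta
    if success:
        alpha += 1
    else:
        beta += 1

print(expect_success())              # 0.8 before any new evidence

for outcome in [False, False, False]:   # a run of contrary evidence
    observe(outcome)

print(expect_success())              # ~0.62: the "disposition" moves, but slowly
```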

In neuroscience the most relevant cognate seems to be top down influence in a layered brain architecture. Top-down influence is estimated to be about 10 times the bandwidth of bottom-up influence. Taken (much too) literally this means that what we "perceive" or "do" is about 10 times more generated from expectations / dispositions than from local circumstances.

Local bottom-up information is still important in our perceptions / actions but mostly as a way to "configure" our dispositions so we apply a relevant / adaptive set of them.

A cautionary note: We should not think of this structure as topping out in some central "knower" or "decider", and also should not think of the dispositions as static, but instead think of each layer as a distributed set of fragments dynamically reconfiguring themselves into the best fit to the influences from above and below.

There are tons of journal articles on this topic, so if we accept the cognate relationship to "dispositions" we have plenty of empirical material to go on with. The translation however is non-trivial, especially because given your answer I guess it means building up a whole new conceptual structure / language in the philosophical domain.

Altogether an interesting path to pursue.

Eric Schwitzgebel said...

Thanks for the continuing comments, folks!

EM: Yes, we have somewhat orthogonal projects, so there's more room for agreement than there might seem to be on the surface. My primary commitment is to superficialism about belief as a definitional/metaphysical matter. A secondary commitment is to the empirical implausibility of a language-of-thought-ish implementation of most beliefs; and I'm not yet decided on how commissive I want to be about that, or how much empirical wrestling I'm game for -- especially since I think both Eric and I agree that in some cases LoT or "P" can play a role (e.g., explicit conscious thoughts that P) and that in other cases non-LoT/non-P-ish architectures are at work, so it's partly a matter of proportion.

Jed: There's definitely a metaphysical literature in philosophy on the nature of dispositions (e.g., Mumford, Heil). I'm not sure about how to translate this into brain architecture. In a way, it's not about the architecture at all but rather about the tendency to do certain things in certain circumstances (which might be implemented by various underlying architectures). Could you tell me where you're getting this 10:1 ratio of top-down to bottom-up influence on perception and action? Surely it varies enormously depending on the type of perception and action?

Callan said...

I don't really understand 'middle level of representations' in regard to why that somehow escapes an industrial grade realist take on neurons?

Are you an instrumentalist, or do you just find you can't think of representation at the industrial level - like, in mechanical terms, you can't. I can't. Nobody can - no one has inner access like that. We all act like instrumentalists (me as well) because we're stuck with that - it's like your post from a while back with chopsticks on the back and the point where the person cannot tell if it is two chopsticks or one. Internally, at a certain point, we can't map an industrial-grade realist take on neurons to representations. We just can't feel it. This could map to your middle level. So are you choosing to be an instrumentalist? Or are you defaulting to instrumentalism? Or are you taking instrumentalism to be true because of the 'evidence' for it that defaulting to it appears to make?

Or that's my take. Maybe it's a load of rubbish!? But I think someone can actually hit the hardware limits of one's own brain. And one might take it as evidence of the way things are rather than an incapacity. Much the same as an illusion not seen as an illusion can seem to be the way things are, rather than an incapacity.

Thanks for your post, Eric :)

Eric Schwitzgebel said...

Callan, I think we're not too far apart here. I could imagine finding industrial-grade representations at the middle level. I just think the evidence tends the opposite direction, so I'm happy to be instrumentalist about them. I also agree that we might be running up against "hardware limits". A very different and more powerful mind might find connectionist, deep-learning structures very natural and intuitive.

Callan said...

Eric, I'm curious about that evidence. To me, much like Bakker's blind brain hypothesis, we just lack certain internal information about our brains (as much as we don't just intuitively know how our hearts work, or how our livers work, or how DNA works...and so on). Nature abhors an absence and to me so does imagination - we put 'here be dragons' on ancient maps where our knowledge ran out. One of the most intense examples of this is what happens inside our skulls - it's both kind of a black box and yet very intensely needs to be understood in others and in oneself (imagine being a black box for a moment and not at all knowing yourself).

To me there's a lot of room for more of an invention than evidence.

Also at a certain point, if it comes down to industrial-grade implementations (things that implement actions in response to industrial-grade representations), then even the representations idea runs out (without implementations it'll no doubt seem like a homunculus theory that relies on infinite homunculi viewing each other's representations, and thus the theory gets dropped in the end)

Eric Schwitzgebel said...

Sure, I basically agree with Bakker on this, and with what you're saying. What I'm saying is that the evidence we have is that there's no industrial-grade belief-that-P representation in there, combining with a belief-that-Q representation. If you look at how the brain works, and the failure of GOFAI models, and the success of deep learning -- that tilts empirically toward no industrial-grade belief-that-P in there. Yes?

Callan said...

Internet ate my comment! So I'll just quickly note I don't understand how the success of deep learning or how the brain works means there is no industrial-grade belief in there. I don't know if you're saying there isn't a 'belief' in there but instead something like a nuanced set of synaptic connections - in which case I would get the idea that 'belief' isn't something in the head. Otherwise I don't know how looking at how the brain works actually shows that there is no industrial-grade belief that P, Eric?

Eric Schwitzgebel said...

The relationship between connectionist network patterns and language-like representations is contentious. But the basic thing that I think the industrial-grade view would predict in a connectionist network is a pattern of activation that is in some sense physiologically or functionally coherent, that is present in systems that believe that P, and that is activated when the belief that P is activated (e.g., this set of nodes activates). But the current best guess, I think, based on what deep learning looks like, is that there won't be such patterns. It will be much messier-looking, unintuitive, and hazy-bordered.
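To put the contrast in testable form, here is a toy sketch (random weights and invented "contexts"; purely illustrative) of what one could look for: how stable is the set of strongly active hidden units across contexts in which the system acts as if P?

```python
# Toy test for an "industrial-grade" belief pattern: across contexts in
# which a network acts as if P, is there one coherent set of hidden
# units that always lights up? (Random weights here -- illustrative only.)
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(64, 16))           # one hidden layer, 64 units

def active_units(context_vector, threshold=1.0):
    hidden = np.maximum(W @ context_vector, 0.0)   # ReLU activations
    return set(np.flatnonzero(hidden > threshold))

# Several distinct contexts that would all count as deploying the
# belief that P (wanting a beer, being asked about the fridge, ...):
contexts = [rng.normal(size=16) for _ in range(5)]
patterns = [active_units(c) for c in contexts]

shared = set.intersection(*patterns)
union = set.union(*patterns)
print(len(shared), "/", len(union), "units shared across all contexts")

# A crisp belief-that-P node set predicts high overlap across contexts;
# the messy, hazy-bordered picture predicts low overlap.
```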

Callan said...

It feels like in Venn diagram terms one circle is 'P is not in the head' and another circle is 'P is in the head'.

To me it seems the situation is like a third circle that overlaps the two. Because taking the extrovert example, I think there is basically a switch set to something (or not set) in the personality settings. There's a whole bunch of dials, and nature spins them on each new birth, blindly seeking the dial combinations that Darwinistically survive the best (and culture adjusts the dials' positions often enough).

'P is in the head' seems to assume that belief would be recognizable at a fundamental physical level in the brain if observed. But by the same token, 'P is not in the head' seems to treat it that belief would not be recognizable and is consigned to real patterns... but where in that is the acceptance of the physical situation (and of how that physical situation manifests in us typing posts like the ones we have and are now)?

One side looks intently (but will ignore anything but that which they expect to see) and the other side looks away (almost in shame?), as if belief must be vindicated or banished.

I guess it might not appear that way and instead it's described as real patterns - but as much as it's real patterns rather than being attributed as the very origins of belief, to me it seems to be looking away. Makes me think of The Picture of Dorian Gray? But that's my take.

Eric Schwitzgebel said...

Vindicated by setting a low bar, I'd say!

Callan said...

Not sure I understand the reply, Eric? To me it doesn't seem to be setting a low bar to refer to real patterns rather than beliefs and also say belief is outside the head. For myself, it seems there is a difficulty in accepting that the origins of belief are something that (in comparison to how beliefs feel) appears inhuman. It seems to be taking belief to be vindicated as is, and as belief was always taken.

The approach here feels like a non-association - like Superman/belief and Clark Kent/real patterns are not being associated with each other. It's a bit presumptuous of me, but 'belief is outside the head' feels like saying Superman is outside the room because in the room there's only Clark Kent present.

So I'll leave it at that I don't agree belief is outside the head - which I'll note doesn't mean I think belief is in a superman form/supernatural form inside the head.


Eric Schwitzgebel said...

I'm not sure how we got on this inside/outside track, Callan. Most people would say that being an extravert is constituted by behavioral, emotional, and cognitive dispositions. If you agree with that, I'd say that belief is structured the same way. On my view, the presence of such dispositions no more requires a stored representation of the content of the belief than the presence of extraversion requires that there be some specific dial in the brain with a setting somewhere on the spectrum from I to E.

Callan said...

Well, I'd say behavior, emotion, and cognitive disposition are derived, Eric. They don't just exist in themselves. I mean, various tragic accounts of brain damage show people's behavior changing, their emotions changing, and their cognitive dispositions changing. The stuff they are derived from changed, so they changed. I genuinely think if an extroverted person suffers certain brain damage they will become an introvert or enter into some other behavior. At the very least, amnesia is an example of a whole range of behaviors suddenly vanishing due to brain trauma.

Whether my position is of interest, I dunno. But I'm describing it so you can see how much trouble I have processing the idea that belief is not in the head but is also real patterns. My position probably matches the position of a number of neuroscientists and neuropsychologists who have studied various cases of brain damage and altered behavior, emotional response, and cognition - so it's probably worth trying to tackle their position at some point. I'd also think it largely matches Bakker's position, though he doesn't really want to believe it (understandably).

So in short I think behaviour, emotion and cognition do require a stored...well, structure rather than representation...in the brain. And to be more specific, they are a derivative of that storage. Otherwise I'm genuinely curious how you would say they would exist at all? But I know I've meandered a bit so thanks for your replies you've taken time to give, Eric, and I can understand if we leave it at that.