Infinitude is a strange and wonderful thing. It transforms the ridiculously improbable into the inevitable.
Now hang on to your hat and glasses. Today's line of reasoning is going to make mere Boltzmann continuants seem boring and mundane.
First, let's suppose that the universe is infinite. This is widely viewed as plausible among cosmologists (see, e.g., Brian Greene and Max Tegmark).
Second, let's suppose that the Copernican Principle holds: We are not in any special position in the universe. This principle is also widely accepted.
Third, let's assume cosmic diversity: We aren't stuck in an infinitely looping variant of a mere (proper) subset of the possibilities. Across infinite spacetime, there's enough variety to run through every finitely specifiable possibility infinitely often.
These assumptions are somewhat orthodox. To get my argument going, we also need a few assumptions that are less orthodox, but I hope not wildly implausible.
Fourth, let's assume that complexity scales up infinitely. In other words, as you zoom out on the infinite cosmos, you don't find that things eventually look simpler as the scale of measurement gets bigger.
Fifth, let's assume that local actions on Earth have chaotic effects of an arbitrarily large magnitude. You know the Butterfly Effect from chaos theory -- the idea that a small perturbation in a complex, "chaotic" system can make a large-scale difference in the later evolution of the system. A butterfly flapping its wings in China could cause the weather in the U.S. weeks later to be different from what it would have been had the butterfly not flapped. Small perturbations amplify. This fifth assumption is that there are cosmic-scale butterfly effects: far-distant, arbitrarily large future events that arise with chaotic sensitivity to events on Earth. Maybe new Big Bangs are triggered, or maybe (as envisioned by Boltzmann) given infinite time, arbitrarily large systems will emerge by chance from high-entropy "heat death" states. However these Big Bangs or Boltzmannian eruptions arise, they are chaotically sensitive to initial conditions -- including the downstream effects of light reflected from Earth's surface.
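As a side illustration (mine, not part of the argument itself), here is a minimal Python sketch of sensitive dependence on initial conditions, using the logistic map with r = 4 -- a textbook chaotic system. The 1e-12 perturbation is an arbitrary stand-in for the butterfly's wing-flap:

```python
# Minimal sketch of the Butterfly Effect: two trajectories of the chaotic
# logistic map x -> r*x*(1-x), r = 4, started a mere 1e-12 apart.

def logistic_trajectory(x0, r=4.0, steps=60):
    """Iterate the logistic map from x0 and return the full trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.4)          # unperturbed initial state
b = logistic_trajectory(0.4 + 1e-12)  # "wing-flap" perturbation

for step in (0, 10, 20, 30, 40, 50):
    print(f"step {step:2d}: |difference| = {abs(a[step] - b[step]):.2e}")

# The gap starts at 1e-12 and roughly doubles each step, reaching
# order 1 -- the full scale of the system -- within a few dozen
# iterations. Small perturbations amplify.
```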
Okay, that's a big assumption to swallow. But I don't think it's absurd. Let's just see where it takes us.
Sixth, given the right kind of complexity, evolutionary processes that favor intelligence will arise. We would not expect such evolutionary processes at most spatiotemporal scales. However, given that complexity scales up infinitely (our fourth assumption), we should expect that at some finite proportion of spatiotemporal scales there are complex systems structured in a way that enables the evolution of intelligence.
From all this it seems to follow that what happens here on Earth -- including the specific choices you make, chaotically amplified as you flap your wings -- can have effects on a cosmic scale that influence the cognition of very large minds.
(Let me be clear that I mean very large minds. I don't mean galaxy-sized minds or visible-universe-sized minds. Galaxy-sized and visible-universe-sized structures in our region don't seem to be of the right sort to support the evolution of intelligence at those scales. I mean way, way up. We have infinitude to play with, after all. And presumably way, way slow if the speed of light is a constraint. Also, I am assuming that time and causation make sense at arbitrarily large scales, but maybe that can be weakened if necessary to something like contingency.)
Now at such scales, anything little old you personally does would very likely be experienced as chance. Suppose, for example, that a cosmic mind utilizes the inflation of Big Bangs. Even if your butterfly effects cause a future Big Bang to happen this way rather than that way, a mind at that scale probably wouldn't have evolved to notice tiny-scale causes like you.
Far-fetched. Cool, perhaps, depending on your taste in cool. Maybe not quite cosmic significance, though, if your decisions only feed a pseudo-random mega-process whose outcome has no meaningful relationship to the content of your decisions.
But we do have infinitude to play with, so we can add one more twist.
Here it is: If the odds of influencing the behavior of an arbitrarily large intelligent system are nonzero, and if we're letting ourselves scale up arbitrarily high, then (granting all the rest of the argument) your decisions will affect the behavior of an infinite number of huge, intelligent systems. Among them there will be some -- a tiny but nonzero proportion! -- such that the following counterfactual is true: If you hadn't made that upbeat, life-affirming choice you in fact just made, that huge, intelligent system would have decided that life wasn't worth living. But fortunately, partly as a result of that thing you just did, that giant intelligence -- let's call it Emily -- will discover happiness and learn to celebrate its existence. Emily might not know about you. Emily might think it's random or find some other aspect of the causal chain to point toward. But still, if you hadn't done that thing, Emily's life would have been much worse.
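For readers who want the infinitude step spelled out, here is a hedged sketch in standard probability notation. The enumeration of candidate systems and the independence assumption are my glosses, not part of the argument as stated:

```latex
% A sketch of the infinitude step. Assume (my gloss) countably many
% candidate large-scale systems S_1, S_2, \dots, each influenced
% independently with probability at least p > 0. Let A_n be the event
% that S_n is influenced. Then
\[
  \sum_{n=1}^{\infty} \Pr(A_n) \;\ge\; \sum_{n=1}^{\infty} p \;=\; \infty,
\]
% so by the second Borel--Cantelli lemma (which is where independence
% is used),
\[
  \Pr\bigl(A_n \text{ occurs for infinitely many } n\bigr) = 1.
\]
```

In words: a fixed nonzero chance, repeated across infinitely many independent opportunities, almost surely pays off infinitely often -- which is all the argument needs to find at least one Emily.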
So, whew! I hope it won't seem presumptuous of me to thank you on Emily's behalf.