Sci-Fi

14 Oct

I have some interesting ideas for sci-fi writing.  I’ve thought of most of it as a product of sheer curiosity about philosophy and futurism.  After the fact, some concepts happen to look ripe for narrative elaboration.  Most intriguingly, some of it is wild stuff which may actually happen.

Many of the perennial issues in human living are sure to evolve in exotic ways in the future, and this provides ample opportunity for an interesting sort of historical continuity and reference.  That’s standard fare, but what if new philosophy and practice actually changes man’s relation to his situation and even his relation to his past?  There are lots of possibilities.  At the same time, I always place a premium on compelling originality and striking jaunts of the imagination.  What’s more, I expect the future to be very weird — much weirder than the already zany present.  I don’t know how exciting weirdness can be, but it can definitely be memorable.

It’d be interesting to juxtapose very subjective and freakishly impersonal views of the future.  That chasm could open up all sorts of avenues for exploring the games egos play, what they actually amount to versus what people believe they’re accomplishing.

AI holds a lot of promise for sci-fi.  While much sci-fi has involved AI, I think the very act of building an AI will be far more dramatic and transformative than many folks realize.  It’s not just another engineering project.  It’s much more, and there’s a story to tell there.  The challenge is that whoever tells the story might have to know quite a lot about how to actually build an AI.

It’s an exciting prospect!

 

I’m not interested in writing space opera type stuff.  I don’t think the most interesting part of the future is in space exploration or in fighting cardboard cutout bad guys.  There is certainly much room to explore the ongoing development of present tensions between science, religion, and philosophy as they relate to personal identity, purpose, politics, AI, lifestyles, and so on.  What excites me most about the future is how humans and/or posthumans/thingybobs will field a sense of purpose and a very enhanced self-awareness in a decisively scientific and technical age.  Purpose has always been an esoteric thing, but it usually finds ways to pose as exoteric.

While purpose is a perennial topic, the future promises its own unique and very dramatic variations on this age-old matter.

 


I Miss Science

30 Sep

I prefer hard studies to philosophy in the long run; philosophy is often a response to frustration in life (I grew up loving science and lost the flair in college due to ill health).  Frustration is present in many famous philosophers: Nietzsche, Stirner, Camus, William James, Schopenhauer, Kierkegaard, Wittgenstein, etc.  Brilliant philosophical books such as Invisible Man by Ralph Ellison also have this quality.  That book has been near the top of my favorites since I read it in college.  It struck me deep.

Some philosophers might not be particularly frustrated in life, but perhaps they have some traits of autism which encourage them to become philosophical in order to understand the behaviors they observe in those around them.  Not all of them do, but the pattern is there.

When I think about the least degenerate and least decadent philosophers, I think of names like Hume, Leibniz, JS Mill, William James, Aristotle, Hayek, and so on.  Good scientists and mathematicians, too.  Einstein was certainly a philosopher, but he’s more famously remembered as a physicist.

There is ultimately no escape from common sense.  In a way, this was probably the life message of Wittgenstein.  The words of philosophy cannot substitute for the rhythms of the world.

Common sense of some sort must precede all elaborate action and plans.  If one merely “does philosophy”, then there’s a danger of mistaking thinking about life for living life.  Sometimes a person is so different and/or frustrated that they are forced to become philosophical.  However, they’ll probably have to look outside of philosophy to reform themselves.  Art, new experiences, meditation, good old habit formation, and so on.

The best philosophy demystifies common sense.  It sharpens it.  It might even be able to remix it.  It is sometimes able to bring one to direct confrontation with one’s limits and choices.  Perhaps it can even trace the paths of subjective emergence.  But it doesn’t take steps for anyone.  It doesn’t tell you who you are, and I’m skeptical about whether it can even tell anyone who they ought to be.

 

Science and technology provide compelling feedback from the physical world.  They offer the means of staging that world to be a friendlier place with more interesting possibilities.

It’s not clear to me exactly how it all fits in with epochs of subjectivity.  This topic teases the space between science, philosophy, and art.  It’s a topic far more advanced than I am smart.  It seems to relate to the nature of intelligence and coordination problems generally.  Complex “diafoci” are established within creatures.  This involves intricate coordination problems in which the whole network has to maintain itself rather like an economy; health becomes a key concept.  The subjectivity involves inherited instincts and overall regimes of conditioning and initiative.  The subjectivity, its history, its environment, and the relations between all of these are essential components of the thing.

There’s also the tricky issue of how one justifies one’s present self-grounded subjectivity without creating all sorts of inconsistencies.  I believe Celia Green stumbled onto issues related to this problem and made what she could of it at the time she wrote The Human Evasion, but there is not necessarily a good answer for it.  It might be a complicated situation involving a strong core and a fuzzy outer region of justification, but it’s well beyond my ability to know.

I figure that these sorts of considerations may turn out to be important for AI and politics.  For now, my considerations are too primitive and preliminary to amount to anything tangible, but maybe a broader perspective can be a jump-start for more practical ideas down the road.  AI will be hard.

 

Technology on its own is very exciting in its potential, but I feel that David Bohm was correct to point out the human propensity for changing something at point A and unintentionally — possibly unknowingly — throwing something askew at point Z (Bohm discusses this sort of thing in his book, Thought as a System).  That sounds like a coordination problem.  I even have a rough word for this kind of outlook.  I call it diarealism to distinguish it from naive 3-D snapshot outlooks.

This coordination problem could become especially tricky when it comes to alterations of the subjectivity itself.

The brain seems to have uniquely powerful coordination capabilities.

 

In the meantime, the people who produce practical stuff are doing the good work.  I want to be one of them.  Philosophy needs to be continually fertilized by empiricism and hard studies to avoid becoming degenerate, anyway.

 

 

Hope

25 Sep

I decided to focus on philosophy because I hoped that it would assist in pointing the way towards a magnificent future.  Also, my mind is eerily optimized for it.  Philosophy alone won’t ever do it, surely, but perhaps it’s not as useless as its detractors claim when taken as one aspect of an intellectual whole.  I felt that any tragedy at all was still too much, and I hoped philosophy might give a glimpse of a perfectly civilized future.  This requires, among other things, an intense and potentially awkward study of human subjectivity.  How good are we at knowing how to help, for example?  I decided to not worry about how awkward it might look from the outside.  I figured that my mind would not be free enough if I worried about such things.

Have I gotten defensive about the freedom to let the mind go out of normal bounds?  Yes!  I felt that it was very important to do that kind of thing.  We won’t see our backwardness until it’s in the rear-view mirror.  Hasn’t it always worked that way?  So, I feel that making present assumptions the unassailable standard is a fitful thing to do — I believe doing so will eventually make losers out of all of us.  We need a few loony philosophers and artists, right?  Some may argue that it’s all basically relative and hopelessly fragile, but what if there is direction to it?  Do we really know until we dig in?

Aiming for Utopian shores is an absurdly ambitious and potentially futile goal, but I found it very motivating.  I never thought I was nearly talented enough to do it, but it’s about all I could believe in.  Here we are in this crazy world.  Why not try to shoot the Moon?  And Mars, of course.  What else is there to do, anyway?  How many times in life does one hear, “that’s just the way things are”?  It doesn’t feel right to say that when your Grandma is going senile or when children are starving.  I got tired of the half-assed attitude towards life.  There are plenty of examples that anyone could think of.

There is, of course, the wonderful sort of work that Bill Gates and many others do.  It’s amazing, and I hope everyone on the entire planet is eventually behind the push.  Love can never be too powerful!

I also found myself wondering how systems of life work.  How do the processes of these systems result in various outcomes?  That’s more like the way a biologist, game theorist, and/or complexity scientist might see it.  It doesn’t really sound moral, but I feel it’s extremely important to recognize the bare physical aspects of these systems.  Without that, I don’t know how it’s possible to have a lasting effect.  Engineers may not be ethicists, but good luck delivering the love of bounty without ’em!  I find that attitude appealing, because I think there is a substantial human bias towards more “obvious” sorts of sentiments.

I thought that there was a subtle problem in merely saying, “this is the right thing to do!”  I know when something feels right, but I found that my sentiments could mislead me.  I felt various actions ought to be interpreted in light of the finite (i.e., mutually exclusive) ability to do x, y, or z at a given time.  It’s easy to bias in favor of the short-term, for one, but this kind of outlook is not necessarily a reliable long-term approach to quelling tragedy.  As such, I thought it might be very important to consider “counterintuitive” possibilities.  That means risking a mind so open that the brain occasionally flies out with wings and a cowboy hat.  And there are streamers, too.  It happens.

I suspect that most human conflict is little more than a pointless relic of our history, instincts, and ignorance.  Much of this behavior is not “rational” in an age on the cusp of AI and advanced genetic studies (if one assumes these items really are near).  Why not?  Because the Universe makes the (meta)rules.  Our DNA, our bodies, and our minds are all running on top of the world.  As far as I can tell, the most wonderful and potent way forward is unlikely to be strictly relative to any given primate’s present DNA.  I bet that we’re all going to be equivalent at that finish line.  It could be a quantum leap beyond partiality.  I’d imagine that it could eventually be figured out without a bunch more horrendous conflict, too.  Computers, science, and maybe even a dash of philosophy might just do the trick in time.  It makes much more sense for everyone to work together so that we can understand our position and grant future humans, creatures, and so on, a world much more wonderful than our own.

It’s easy to get lost in the nasty parts of the thing.  Evolution has been a brutal process, and it still seems difficult for human beings to set things right.  It’s easy to succumb to self-pity and false machismo (Whoa, look at this picture of animals eating each other alive!  Darwin and stuff!).  But maybe the view is very different if one summits the local peak in order to extend the horizon just that much more.  Imagine what might be seen from the highest peaks.  What might one see from an intellectual spaceship?

Put another, more provocative way:  what kinds of instincts would extremely advanced and civilized beings have?

 

I’m quite loose with the humor, but I never mean it personally.  If it’s in bad taste, it’s only because I’m ignorant, idiotic, and/or oblivious.  I don’t usually worry a lot about my personal image — partly because I wanted to keep my mind as free as possible (that whole subjectivity thing again…).  I don’t care a whit about superficial differences in people, and I find it redundant to say things like, “I respect women.”  Even saying it feels odd to me.  Of course I do.  I take it for granted!  I find it whimsical to say or to think such things.  Sentience is sentience.

The whole method of trying to get beyond typical subjectivity naturally lends itself to misinterpretation, as the general expectation is that everyone is following a given standard of (subjective) common sense.  My concern, however, is that it is often these very standards of subjectivity which suppress higher insight.  We might eventually get a rigorous expression of this.  Thomas Kuhn and Celia Green, among others, have already staked us to a hell of a start.  There’s Bayes, but I think there will be more than that.  Subjectivity is an incredible philosophical topic for a variety of reasons, and I think there’s a lot more to learn about it in the flesh and in the soul.

A key observation about subjectivity: it is necessarily finite.  As such, it can never make all the leaps at once.  It’s reminiscent of William James in The Will to Believe: there is a budget of skepticism and hand-sitting.  We have to make a lot of rough assumptions and commitments just to stay alive and keep things running.  Surplus affords us some luxury of skepticism, exploration, invention, etc., and sometimes that leads to dramatic reconceptualization of what it means to be human.

 

Leadership

23 Sep

I’ve recently come across some articles on leadership, and I find myself wondering why they don’t mention trust more.  A leader with earned trust can do amazing things.  The leader might earn this trust with competence, consistency, and compassion.

Distrust breeds pessimism.  This has feedback effects: less investment, less initiative, less commitment, less profound organization, etc.

Is being trusted a privilege or a right?

 

 

Coherence

12 Sep

Note:  One of my email accounts might be compromised.  I can’t be sure at the moment.  It’s a boring email account, but I have to protect myself against possible imposters.  Any real emails from me are guaranteed to be 100% boring and civil.

 

I’ve made a lot of progress in my personal outlook over the past two weeks.  It surprised me.

One big idea is that consciousness is a product of highly complex coherence.  The brain is largely dependent on its environment for this coherence.  This observation may appear inert, but it would seem to indicate that the so-called “phi” depends on many levels of integration — many of which are completely outside of the flesh-and-blood human being.  One might view the Universe as a massive optical array generative of different categories of lenses (e.g., gravitational, chemical, genetic…).  Some of these require very long exposure times.

Somehow, this symbiosis flips a switch for consciousness when the coherence becomes focused and usable within a person.  I’m pretty sure it’s a big deal, but it requires much more serious and rigorous investigation before any dramatic claims are made.  Dramatic speculation probably must precede dramatic claims, however.  A striking possibility is that consciousness is related to what might be called “context objects”, which pool inertia in harvestable ways.

This also led me to think about the factors involved in the rate of production of tightly focused, highly complex coherence.  The well-bred and well-functioning human brain seems to generate “novel” coherence much more rapidly than evolution does, for example.  This point was especially poignant for me, as I recently took the online Big History course.  The course puts a great deal of emphasis on “thresholds of complexity”, and it seems to fit with my thinking in this regard.

For now it’s but a tantalizing speculation.

The Devil’s in the Details

5 Sep

For richly meaningful communication and technology, acute specification is a necessary condition.  This specification implies an empirically informed semantics, along with the formal notations and methods which complete it.

Making specific, interesting, and novel claims about physically accessible stuff in the real world raises the stakes very quickly.  A rock either floats in Crater Lake or it doesn’t.  Lightning is either physically related to static electricity or it is not.  A verbal command either produces the desired response or fails to.  A scientific equation either produces correct, unique, and useful predictions or it doesn’t.  A drug either produces statistically significant effects or it doesn’t.  Either Google Maps can find a practical route from A to B or it can’t.  Either a product is profitable for a company during a given period or it isn’t.

Being original, specific, and bold in this manner is an excellent way to rapidly gain or lose credibility.  Furthermore, specification of hypotheses enables a paring of the search space.  Even if I get something unequivocally wrong, at least I’ve learned about one definite dead-end.  If I never raise the stakes of specification enough to risk being blatantly wrong, then I might leave myself fields upon fields of weeds to frolic in.  That’s a special kind of freedom.

(The relation between the concept of health and the concept of specification might be interesting.  I mention this here as a reminder of the complexities of certain physical questions.)

It’s obvious enough, but I suspect this intuition about getting down to relevant specifics tends to get buried outside of particular experiments.  Some scientists seem hostile towards philosophy today, and I suspect it’s this intuition about high-stakes specifics that undergirds part of the animosity.  Scientists realize that many people who call themselves philosophers are employing horribly vague language which can be easily reinterpreted in an ad hoc fashion as suits the agenda of the philosophers.  There seem to be infinitely many free parameters.  Scientists are supposed to winnow possibilities as much as possible instead of accumulating them like signatures at Disneyland, so it’s not surprising that they’d take issue with this behavior.  They realize how unproductive it is.

Meanwhile, a so-called philosopher might throw around a lot of important-sounding words like “the infinite”, “morality”, “consciousness”, “intentionality”, “fanny”, and so on.  This type of rhetoric can make the words of the “philosopher” seem very important and wondrous — even as the words may be effectively meaningless.  The words play fast and loose with basic intuitions and emotions that we all have.

However, this idea of stakes has not been completely lost on philosophy, and William James’ The Will to Believe is a wonderful example of philosophy at its best.  I don’t endorse everything in the essay, but there’s no doubt that WJ knew how to usefully get to the point.

Good philosophy has to somehow connect to bearing stakes; otherwise it will succumb to the most damning criticisms of metaphysics.  If philosophy isn’t putting anything on the line and isn’t providing anything original — especially if it can’t, structurally speaking — then it has no good answer for the criticism.  Its stakes will not be identical to those of science (if they were, philosophy would be redundant), but it’s got to have something analogous.  If there is no sense of an arrow of time in philosophy, then there might be something rotten in Kierkegaard’s Denmark.  Ew.

Maybe it’s like the 20 questions game.  The early questions are intended to knock out huge regions of the search space, but they usually yield general information instead of highly specified data.
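
To put rough numbers on that intuition, here’s a toy sketch (entirely illustrative; the function and figures are my own stand-ins, not anything formal): a yes/no question that splits the remaining candidates roughly in half lets twenty questions pin down one possibility out of about a million, while a vague question that only shaves off a sliver leaves most of the field standing.

```python
# Toy illustration: how fast yes/no questions shrink a search space,
# depending on what fraction of the remaining candidates each one eliminates.

def remaining(candidates: int, prune_fraction: float, questions: int) -> int:
    """Roughly how many candidates survive `questions` rounds, where each
    question knocks out `prune_fraction` of whatever is left."""
    for _ in range(questions):
        candidates = max(1, round(candidates * (1 - prune_fraction)))
    return candidates

START = 2 ** 20  # about a million possibilities

# A sharp question halves the field each time; a mushy one barely dents it.
print(remaining(START, 0.50, 20))  # 1 -- twenty clean halvings settle it
print(remaining(START, 0.05, 20))  # roughly 376,000 candidates still standing
```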

 

The Will to Believe

3 Sep

Viable living things necessarily have to act without knowing the full nature of reality.  Any abstaining creatures or Kierkegaards will not contribute to the genetic future.  Furthermore, a failure to act isn’t necessarily less harmful than acting is.  Is there a moral tragedy in this state of affairs?  I feel there is, but I don’t see any way around it.  It looks to be the way nature works.

There are several possible approaches to the matter (not necessarily mutually exclusive):

-Deny the importance and/or the reality of the conundrum.  “Who cares!”

-Assert that we know enough to be confident about the goodness (or badness) of our actions.  “Things are getting better!”

-Deny the need for complete knowledge.  “I know enough about the here and now to confidently act.  Let those who follow deal with their times in due course.”

-Struggle with the issue, but refuse to look at it squarely even as one obsesses over it.  “If I only write 100,000 more words and say eleventy more prayers, something might change!”

-Accept the importance of the issue and regard it as a fundamentally immoral situation.  “I’m a Buddhist monk.”

-Accept the gravity of the matter and decide that it’s at least moral to pursue more understanding of the situation.  “I’m Gnostic.”

-Assert that the whole situation is a sorry and amusing matter of a deluded human consciousness taking itself too seriously.  “I think you invented this ethical issue just to feel important, and I spend my time mocking you.”

-Skirt the issue via constant diversion.  “If a tree falls in the woods and I’m too drunk to know that I’m in the woods…”