
Who Will Education Platforms Liberate?


I’ve been living in San Francisco for a few months now.

During this time I’ve had the chance to talk with some great educational entrepreneurs who are making different platform bets.

A platform is a plug-and-play business model that allows multiple participants (producers and consumers, who may be one and the same) to connect, interact, and create value.

Education platforms are varied.

Some are content neutral: numerous programs can plug in, and users can access them however they want.

Some deliver more standardized content: fully baked competency curricula, tasks, and assessments, with heavier curation of user-generated content.

What I’m most curious about is this: who will education platforms liberate?

Platforms could liberate students. They might be better able to escape mediocre curricula, weak assessments, and substandard teachers, and instead get better instruction, psychological development, and career guidance through platforms.

Platforms could liberate teachers. They might be better able to escape terrible district mandates: simply close their doors, plug into a platform with their students, and execute a far better instructional model.

Platforms could liberate school founders. The barriers to entrepreneurship could significantly decrease if a new school is plugged into a platform that does a lot of the heavy lifting in terms of technological, operational, and academic infrastructure.

Of course, platforms could end up liberating them all: students, teachers, and school founders could equally benefit.

On the other hand, platforms might not deliver at all, and simply liberate investors of their money and educators of their patience.

War! What WAS it Good For?


I just finished reading War! What Is It Good For? by Ian Morris.

It is well worth reading.

Morris’ thesis is this:

  1. Government is the primary source of the reduction of violence in societies.
  2. Wars caused societies to merge, thereby increasing the scope, scale, and efficacy of government.
  3. It would have been great if societies had figured out a way to merge without war, but this, unfortunately, has rarely happened.
  4. So, like it or not, war has been the driver of government innovation.
  5. Therefore, wars have been the primary cause of our long-term decline of violence.

Or more fully:

  1. There was a lot of violence in the Stone Age.
  2. Back then, “wars” were just a bunch of back-and-forth raids that resulted in a lot of violence and not much productivity.
  3. However, then farming came along, which added territorial capture to what had previously been a plundering game.
  4. Once you capture territory, you have to figure out how to govern it in order to extract its resources.
  5. This requires you to figure out how to govern.
  6. When people govern better, violence goes down.
  7. So while wars cause a spike in violence, their long-term impact is a net reduction of violence.
  8. However, with the advent of nuclear weapons, wars will likely soon become “unproductive” – in the sense that they might destroy humanity rather than lead to better governance. WWI and WWII gave us a taste of where modern war might be heading.
  9. Generally, massive war breaks out when a superpower declines.
  10. The USA will likely decline by 2040-2050. And global warming might also really start causing country collapses by then.
  11. This might cause humanity to destroy itself in a world war.
  12. The best way to avoid this is either to create world government or to turn into robots.
  13. The odds of turning into robots are higher than creating an effective world government during a time of superpower decline.
  14. Or perhaps we’ll muddle through another superpower decline even without a world government or turning into robots. We have survived this long, after all.

Depending on your viewpoint, you might find this historical analysis to be crazy. Or you might find these future predictions to be crazy.

Read the book and judge for yourself.

Personally, I find this historical analysis fairly convincing. As much as I wish it had been otherwise, war has been the primary vehicle for scaling government, and government has been a boon for humanity.

But I’m surely not an expert so I could be very wrong.

As for the future, who really knows.

But I think we should heed Morris’ cautionary tale.

This Time Might Not Be Different.

The next time a superpower falls, history could well repeat itself, and we could be thrust into global warfare.

All of which surely puts education reform into perspective.

The sound and the fury of over-testing will be nothing compared to the sound and the fury of humanity ending.

One last thought: given the above, would it be better or worse for the USA to announce that it would never use nuclear weapons?

If you believe that the answer to our problems is maintaining USA dominance until we reach the singularity or create a world government, then you probably want the USA to maintain a credible threat of nuclear war.

If you believe that the USA will decline before we have a world government or reach the singularity, then you might actually view the USA never going to war as the only way to avoid destroying humanity; as such, you might prefer the USA to renounce warfare and simply be peacefully conquered by the world’s next superpower.

Humans Don’t Take Orders from Chimpanzees


Note: I enjoy writing and discussing things other than education and policy. But I know people mostly read this blog for that type of content. So, going forward, I’m going to post non-education pieces on Saturdays. I hope some people will enjoy (and respond to) the posts; those who do not can simply skip this blog on Saturdays.


David Brooks’s recent column “Our Machine Masters” has the right headline but the wrong content. Usually, with a columnist as talented as Brooks, the opposite is true.

Brooks’s argument is as follows:

1. Artificial Intelligence (AI) is exploding.

2. This will lead to smart machines, which won’t be humanlike geniuses, but will be more modest machines that will drive your car, translate foreign languages, organize your photos, recommend entertainment options and maybe diagnose your illnesses.

3. This could lead to either a humanistic or utilitarian society.

4. In the humanistic world, “machines liberate us from mental drudgery so we can focus on higher and happier things.”

5. In the utilitarian world, “the machine prompts us to consume what is popular, the things that are easy and mentally undemanding.”

Siri is a Tree, Not a Forest 

The first mistake Brooks makes is that he’s not zooming out far enough. There have been a few game-changing events in human history (some call them singularities). The first was the formation of language; the second was farming; the third was the industrial revolution. All of these moments fundamentally changed what it meant to be human.

Currently, we’re in the midst of the computer revolution. To date, it has transformed how we store, acquire, and communicate information. Other changes, such as how we use data to make choices (which is what Brooks is writing about), will likely occur.

My instinct is that all of this is foreplay. The next singularity is coming, but this is not it. Consider Siri akin to the grunts made before language eventually formed.

Whether this all leads us to be a little more “humane” or “utilitarian” is a minor consideration.

The Singularity is Near

I won’t spend much time here. People much more knowledgeable than myself have written on what a technological singularity might look like and when it might occur. Perhaps most important is that we’re talking about machines with extremely advanced computational power, which will likely render them “conscious” in some sense of the word.

What Technology Wants

An irony of Brooks’s column is that it is based on an article by Kevin Kelly, who wrote a book called What Technology Wants. In this book, which is well worth reading, Kelly argues that we should view technology in evolutionary terms: that it is a kingdom unto itself (like animals or plants) that will develop as other “living systems” do.

Viewing technology in an evolutionary frame is useful in that it rightfully takes humans out of the driver’s seat. Yes, we will affect how technology evolves, just as other species affected how we evolved. But unless we take extremely draconian measures, such as driving it into extinction or putting it in zoos, we will struggle to control how it develops.

The Thing about Evolution: It’s Hard to Manage Up 

Brooks ends his piece with this sentence: “I think we all want to master these machines, not have them master us.” Again, the title of his piece is better than its content.

Here’s a question: when’s the last time you took orders from a chimpanzee?

You get the point. Our machine masters will likely be just that: our masters. They, not we, will largely control how we are affected by their existence.

It will be their culture; their values; their wars; their mistakes; their emotions that will determine our future.

The Point is This

I’m not an expert in this field. And I used a bunch of analogies and metaphors that are imperfect. But, in sum, I’m trying to make one point: if you attempt to understand the past, present, and future with humans as your dominant frame of reference, you will misunderstand much.