Sunday, December 21, 2008

Memory Leak in ActiveMessaging

ActiveMessaging is great. It allows you to easily hook up to ActiveMQ to offload all your batch processing needs. Only problem is, it eats memory like crazy. Just hook up a simple queue with a publisher and a consumer, write a few hundred thousand tickets, and watch the consumer eat all your available memory (it will quickly chew through a couple hundred megs and go on to use more than a GB).

In the gateway.rb, there is a dispatch method that routes the message to the appropriate processor:


def dispatch(message)
  @@guard.synchronize {
    begin
      prepare_application
      _dispatch(message)
    rescue Object => exc
      ActiveMessaging.logger.error "Dispatch exception: #{exc}"
      ActiveMessaging.logger.error exc.backtrace.join("\n\t")
      raise exc
    ensure
      reset_application
    end
  }
end
If you comment out the prepare_application and reset_application calls, the memory consumption stops. You can chew through millions of tickets at steady usage. The only problem is that now ActiveRecord will not keep its MySQL connection fresh, i.e. you will eventually get a Mysql::Error: MySQL server has gone away.

These methods wedge deep into Rails' dispatch foo. Somewhere in there, it is likely validating the connection. So the trick will probably be to override the process!(message) method of the base processor class, rescue Mysql::Error, call ActiveRecord::Base.verify_active_connections!, and retry.
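Until I can validate it against a real queue, here is a runnable sketch of that retry pattern. Processor, StaleConnectionError, and refresh_connection are stand-ins for ActiveMessaging's base processor, Mysql::Error, and ActiveRecord::Base.verify_active_connections! respectively:

```ruby
class StaleConnectionError < StandardError; end

class Processor
  def initialize
    @fresh = false
  end

  # process! wraps on_message: if the first attempt hits a stale
  # connection, refresh the connection and retry once before giving up.
  def process!(message)
    retried = false
    begin
      on_message(message)
    rescue StaleConnectionError
      raise if retried
      retried = true
      refresh_connection  # stands in for verify_active_connections!
      retry
    end
  end

  def refresh_connection
    @fresh = true
  end

  def on_message(message)
    raise StaleConnectionError unless @fresh  # simulate a dead MySQL link
    "processed #{message}"
  end
end

Processor.new.process!("ticket")  # => "processed ticket"
```

The key detail is retrying exactly once: if the connection is still dead after a refresh, something bigger is wrong and the exception should propagate.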

I will update this once I can validate it to see if this fixes the stale connection issue and if I run into any other issues, or if any kind commenter leaves the answer.

Monday, November 17, 2008

Entrepreneurs, Love, and Signals

Intelligence gets things done by planning, acting, and then making continual adjustments. In order to be successful it must be able to evaluate its current course against where it was planning on going. If things are off, it adjusts, changing direction, speed, etc. to get back on the right path. As the old tired quote goes, "How do you get to the top of a mountain? Just make sure each step leads you higher".

Now, we get signals along the way that indicate how we are doing. A simple signal would be a measurement: are we higher or lower than the last step? This can then be compared with the desired goal if it is itself measurable (I want to make it to 5,000 ft). In this way we can make adjustments with each step. We can even make sophisticated decisions, like whether to go down 10 steps to a bridge so I do not have to go down 1,000 steps to get across a gorge.

Not all signals are equal. Some goals have very good signals (like height in our example). Others have much more complex signals. A good example of a goal with complex signals is love. Love is many things to many people, but usually has something to do with prioritization of another over one's self, or at least over others compared to the one loved. But this notion of prioritization is very complex and is bound up with hundreds of social conventions and traditions. Even amongst personality types we see differences in what is perceived as meeting this goal.

In fact, almost all interpersonal relations are subject to these same complex signals: friendship, family, gift giving, politics, and leadership. All are enormously complex, with a multitude of signals that vary in significance from person to person and even from time to time for the same individual.

Now, a given intelligence is only capable of so much planning and signal processing. Like any other resource, planning, adjusting, and signal processing are subject to the limits of time, processing power, and material resources that can be used in goal attainment. This is a two-way street. Interpersonal relations are most commonly a very intimate, one-on-one type of goal. Even the best individuals are by themselves capable of attaining goals with only a very small set of relations (relative to the total number of people in existence).

Now, in any society of individuals a system of trade (a market) inevitably comes into play. People produce and consume goods and services and, through specialization, come to use a common medium of exchange called a currency or, more commonly, money. (This is obviously a very cursory treatment of the subject. For those so inclined I highly recommend the books Human Action and The Theory of Money and Credit by von Mises, and Man, Economy, and State by Rothbard.) Prices (the costs in money of specific goods) become a universal signal for making decisions relating to the market. Prices adjust relative to supply and demand for any specific good, which reflect the aggregate desires of individuals across a huge section of everyday life (remember, labor is just as much a good as any other).

Entrepreneurs use these market signals (prices) to attain their goal (profit). In so doing they fulfill the desires of a great number of individuals in addition to their own. By creating a single good and moving it through the advanced distribution networks we have developed today, an entrepreneur can impact the lives of millions. A humble farmer can now, with current technology, produce enough food for hundreds of individuals. A clothing designer can create fashion to be enjoyed by millions.

Markets effectively offer a very simple signal which masks the complexity of the underlying system of interpersonal signals. By using the simplified market signal, we are able to achieve a scale far greater than that which is available to us on a direct interpersonal level.

For the few interpersonal relations you have resources for, by all means work towards your goals. They are greatly rewarding. But if you want to maximize your goal attainment, then use the market's signals and produce!

Sunday, September 28, 2008

Meta and Politics

Sometimes we as humans are drawn to simple little systems, where the rules are clearly laid out, and the basic axioms are short. Things like Newtonian physics and Fibonacci sequences. When these simple systems have the ability to combine their rules in an open ended manner, the results can become quite complex, even though the ground rules are simple. These systems are happily adapted to computers which can handle insanely detailed application of these rules.

Reality however, is rarely one of these systems. Real systems tend to be much more complex. Intelligence for example, is certainly not readily describable in terms of a few basic axioms. Categorization, pattern matching, and allegory are all heavily dependent on the Meta. Anything dealing with general intelligence, communication, or flexible systems tends to be heavily imbued with Meta rules, that is, rules for describing rules.

When a system at its base is not a set of rules applied to the direct problem domain, but is instead made up of ground rules for the creation of rules depending on the current situation, the potential for complexity soars. These meta rules allow the system to be very adaptable, and tend to fool our overly aggressive causal tendencies by producing counter-intuitive results. In fact, many times these Meta rules are completely hidden to the outside observer, masked by the generated rules for a given circumstance.

Many Meta systems are inherently recursive, not only allowing for several levels of rules, but also rules that depend on and are defined in terms of themselves. This creates a situation in which all the rules can be explained, but the implications of those same rules are largely unknown (even when they are not hidden). The lines between Meta-rule and rule tend to blur.

Human intelligence appears to be highly meta, insanely complex, and capable of producing very unexpected results. Economics is the study of these systems in action, in parallel with billions of other similar systems all interacting at once. The resultant complexity astounds the mind. When so called "economists" are called upon to make predictions as to the future state of this massive interaction of complex systems (itself now a new monster system called the economy), one can only laugh, and shake one's head, as we watch the circus of those pretending to be the masters of meta fall down again and again.

So create your wrong-headed, doomed "Rescue" bills and regulations. Sell them to the public as a snake oil salesman fleeces the poor and downtrodden. And as disaster strikes yet again, sleep tight with the defense of "it couldn't be helped!", and "no one can blame us, we listened to the experts!".

Sunday, September 14, 2008

Meaningful Phrases

I am trying to recall some of the moments in my past where I had or read a particular thought that produced or summed up a great deal of the way I think. Here is a sample of what came to mind.

People are fundamentally selfish. They act according to their most "perceived beneficial" action at all times. I was first exposed to this concept formally in C.S. Lewis' Mere Christianity. It was later spelled out in exhaustive detail by von Mises' Human Action and Murray Rothbard's Man, Economy, and State, and it forms the basis of the Austrian School of Economics.

The Utopian vision of man vs. the corrupt vision of man. I first read about this in Thomas Sowell's book Basic Economics. Basically: is man approaching some far-away super society where he corrects his flaws and lives in harmony, or is where we are now basically where we have been all along and always will be? You can split most political, moral, and social systems of thought with this one question. I tend to believe in the corrupt version.

The amount of information in a signal varies directly with how random it is. Put another way, the ability to compress information varies inversely with how random it is. As an example, the particle movements of a wave crashing on the beach contain an incredible amount of information; it would take an incredible amount of computing horsepower to simulate them exactly. The main interest for me is that complexity is not in and of itself useful or powerful. A more complex system is not fundamentally better; the reverse is more generally true.
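You can see the compression side of this directly with Ruby's standard zlib bindings: a highly repetitive (low randomness) string deflates to almost nothing, while random bytes barely compress at all.

```ruby
require 'zlib'

repetitive = "a" * 10_000               # very low information content
random     = Random.new(42).bytes(10_000)  # 10,000 pseudo-random bytes

puts Zlib::Deflate.deflate(repetitive).bytesize  # tiny, well under 100 bytes
puts Zlib::Deflate.deflate(random).bytesize      # close to the full 10,000
```

The random string carries roughly its full length in information, so deflate can do almost nothing with it.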

Humans are rationalizing creatures rather than rational ones. The world is far more random than we think. This was an intuitive hunch for most of my life, with pointers from Steven Pinker's books on the mind, but laid out beautifully by Taleb in The Black Swan and Fooled by Randomness. People are fundamentally wired to see cause and effect in every situation and believe they know these causes, even in very complex systems. Humans are continually fooled by selection bias, and we share a universal inability to handle probabilistic thinking. This explains hero worship, our inability to learn from history, and the extreme epistemic hubris of most people. The appeal to me, and the defining characteristic, of the Austrian school of economics is its humility in regards to complex systems and its reluctance in determining cause and effect in history.

These are a few that came to mind this afternoon, what are yours?

Monday, August 18, 2008

Priming Surveys

Checking out frogmetrics.com and thinking it is really cool, but reminds me...

How many times have you seen a survey with answers from 1-10, or 1-5 stars? Every time I come across one of these I wonder how in the world they aggregate the responses in a meaningful way. A very level person will answer most things close to the center (everything is a 4-6) where an excitable personality tends to hit the extremes. They both likely meant the same thing by their feedback. The same goes for Netflix movie rankings, for example, and then they try to give me recommendations based on what other users felt. The problem is that the level heads and the excitable are all mixed in together.

So is there a way you could get a sense of what personality they are, to help classify their answers? What if you asked a single question somewhere in the survey that was a "primer" question, something with a decent emotional response, like: how would winning 100 dollars make you feel, 1-10?
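A rough sketch of how that might work, treating the primer answer as each person's personal ceiling on the scale (the scaling rule here is invented purely for illustration):

```ruby
# Rescale each answer by the respondent's primer response, so a
# level-headed 5 and an excitable 10 map to the same calibrated top.
def calibrate(answers, primer, scale_max = 10.0)
  factor = scale_max / primer
  answers.map { |a| [a * factor, scale_max].min }
end

calibrate([4, 5, 6], 5.0)    # level-headed => [8.0, 10.0, 10.0]
calibrate([8, 10, 9], 10.0)  # excitable    => [8.0, 10.0, 9.0]
```

After calibration, the level-headed 4 and the excitable 8 both land on 8.0, which is presumably closer to what each respondent actually meant.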

Friday, August 15, 2008

Named Parameters in Ruby

Ever forget whether the name or email parameter is first on a method like this?

signup(name,email)

Named parameters can help out so that you don't have to remember the order of parameters. As a fringe benefit the code can read a bit nicer and you can add in new optional parameters without breaking existing code.

Ruby as a language doesn't support named parameters to functions. But it does support converting trailing name/value pairs into a hash if you provide an argument to receive it.


def signup(params)
  name = params[:name]
  email = params[:email]
  ...
end


This takes a little more work on the function declaration but it's not too bad. Now we can call the function like this:

signup(:name=>'Me', :email=>'me@net.com')

Suppose you wanted your name parameter to be optional and default to the email parameter. You can easily set default values for one or more of your expected parameters:


def signup(params)
  email = params[:email]
  name = params[:name] || email
  ...
end


With named parameters it often behooves you to do a bit more parameter checking.


def signup(params)
  email = params[:email] || raise("email parameter is required!")
  name = params[:name] || email
  ...
end


To make all parameters optional, set a default value for your parameter to {}.

def optional(params={})
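Putting all the pieces above together, here is a complete, runnable sketch (the body just returns a formatted string so there is something to see; in real code it would create the account):

```ruby
def signup(params = {})
  email = params[:email] || raise(ArgumentError, "email parameter is required!")
  name  = params[:name] || email    # name defaults to email
  "#{name} <#{email}>"
end

signup(:email => 'me@net.com')                 # => "me@net.com <me@net.com>"
signup(:name => 'Me', :email => 'me@net.com')  # => "Me <me@net.com>"
```

Calling signup with no arguments at all raises ArgumentError, thanks to the empty-hash default combined with the required-parameter check.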

Sunday, July 27, 2008

My mind is out to get me

Maybe I am overly paranoid, but I think my mind is out to get me. Oftentimes I am woken up at night by the "I have to pee" agent of my mind. It gets me stumbling downstairs, only to realize I didn't really have to go. Then it comes to light that I am actually really thirsty. Apparently I ignore my "gotta go" agent less than my "you're thirsty" system, probably due to the consequences of ignoring each.

Do agents of your mind lie, cheat, and steal to get you (your agents actually) to do their bidding? Did my "thirsty" agent abuse my "gotta go" agent to do its bidding? Is your subconscious mind a scheduler that relies on agents telling the truth? If your agents start lying to your subconscious to affect behavior, who can you trust? Excuse me while I go lie down for a while, the world just got a little scarier for me.

Life recording and Infinite regress

What if you could record your life all the time? You would then start reviewing those recordings so you could improve at whatever you do. And then you would obviously want to improve your reviewing process (as it's something you do) so you could become better at whatever you were doing in the first place. This could go on for a while.

Likely the law of diminishing returns would kick in and you would stop before too many meta layers. How do we as intelligent beings so effortlessly make these decisions? Just reading this probably makes you wonder what kind of idiot I am for even positing such an absurd situation. Maybe we are protected by a mechanism of boredom. Seems like this would be an effective deterrent to the infinite regress in many forms. How would you design a machine that gets bored? Or I suppose this is the same question as how do you build a machine that has interests?

Thursday, July 24, 2008

Ruby Batch Processing: ActiveMQ and ActiveMessaging

Let's say you have some long running background tasks that are triggered by user actions. For example, you may want to allow users to upload a list of bookmark URLs that you will create thumbnails for.

Here's the sequence of events we are looking for.


user posts list -> rails controller -> create ticket(s) -> queue -> (offline) processor(s) do work


A message queue is a good solution for this type of setup if you want to have a number of processors that you can simply start more of to scale. ActiveMQ and ActiveMessaging using STOMP make a simple ruby/rails solution.

ActiveMQ
ActiveMessaging (A13g)

Create our processor

class ThumbnailProcessor < ApplicationProcessor
  # - use the ActiveMQ STOMP extension prefetchSize
  #   to only take 1 ticket at a time
  # - set ack to client or else prefetchSize won't
  #   do any good
  subscribes_to :thumbnail,
    {'activemq.prefetchSize' => 1, :ack => 'client'}

  def on_message(msg)
    # stub for long running code that
    # creates a thumbnail from a url
    create_thumbnail(msg)
  end
end


Create our controller

class ThumbnailController < ApplicationController
  publishes_to :thumbnail

  def create
    # create a ticket for each url in list
    params[:urls].split.each do |url|
      publish :thumbnail, url
    end
  end
end


Configure ActiveMessaging (config/messaging.rb)

ActiveMessaging::Gateway.define do |s|
  s.destination :thumbnail, '/queue/thumbnail'
end


Now spin up as many processors as you need (you can always start more later)

./script/poller start
./script/poller start
./script/poller start


and you are ready to roll!

Sunday, July 06, 2008

Emergent Behavior and Software

I recently watched the interesting movie Idiocracy by Mike Judge. The basic premise is that intelligence reached its pinnacle of evolutionary usefulness sometime during our recent history, and that selective pressures reduce it over the next 500 years to the point that the mean IQ is just barely functional. This is an interesting idea based on the selfish gene theory and a rather non-blank-slate theory of mind that says most of what we possess in the mental department is god-given. This allows natural selection to determine genetically what will be the dominant and useful traits for survival, reproduction, and ultimately the mean IQ of the general population.

Now there is another notion of selection that occurs not at the gene level, but at the idea level. This is often referred to as memes, either on a social/cultural level or at a more basic story level. In this model, what survives is not the hardware but the software. If this model is true, then what determines the daily lives of people will not be the quality of their hardware, but that of their software. What ideas, theories, and algorithms will survive?

Monday, June 30, 2008

Polish

What is polish? Polish is taking things to their logical conclusion. It is tying up all loose ends, fully exploring all possibilities. It is solving the essence of the problem and documenting exactly how to reproduce the solution.

Polish is about simplicity, elegance, and removing clutter. It is removing the pieces of stone from around the statue within the block of marble. Polish is exceeding expectations, going far above the norm. Polish is felt in a well designed product that just works, that fits in your hands comfortably. In the real world, these are items of just the right heft, perfect shape, and the simplest design to get the job done. In the virtual world, they are novels that not only illustrate an idea, but put it into practice live before your imagination so that you feel as if you have lived it. Or they are mathematical models so elegant they make you cry out that you did not see it first yourself!

Now there are those who will tell you that you should just do something, get anything in front of people so that you don't waste a lot of time creating something perfect that nobody wants. I would tend to agree, but with one qualification: what you put in front of them must be usable. If it is a video game, it better shoot. If it is a web app, it should allow you to do one thing well, and do it completely. If it is a mathematical proof it should be the simplest version of it that is still comprehensible.

How do we deal with complexity? With simplicity. Crush every problem down to the very root of the issue. Address this most fundamental problem in the simplest yet still helpful way, then iterate. Make sure at each stage you offer a complete solution to the simplest statement of that particular problem.

Technology allows us to economically pursue polish as never before in history. Now that one man can create ideas and products that are used by billions, no amount of effort can be wasted on perfecting the right idea. Your idea must compete with literally every other on earth in the past and present. Increasingly the only way to do this is with polish.

Communication of ideas between humans is hard and this will not change. With polish, rise to the challenge and transfer the essence of what you have created into the hearts and minds of others. While all this means much more work for those who produce, it is an amazing time to be alive. We are to be witnesses and participants to the most complete, most usable products and ideas of any time in history. Create things, and create them well.

Sunday, June 22, 2008

The Appeal of Authority

Why does man crave authority? Why does man crave certainty? These two questions seem to be the restatement of a single human desire: security. We all crave security in various forms. Few can feel any long term happiness without it. Our longing for the known can drive reasonable men to very unreasonable actions. I have witnessed countless times otherwise sane persons throw all intellectual integrity to the wind for the simple illusion of security found through claims of certainty.

Want to influence men? Want to persuade others? Deal in certainty. Sell security as your commodity. Provide confidence in claims, guarantee your promises, and smile without flinching. How many times have we seen hucksters one job ahead of the curve, one promotion away from all their failed promises? No one seems to remember long. Those around them simply eat up the certainty like candy and care less that the claims didn't pan out than that they were made with certainty and authority.

The selection bias provides us with our historical winners. Make enough authoritative claims and a few will turn out to be true. If you are lucky, you may even go down in history as a great visionary.

This works for blogging, the workplace, and especially in personal relationships. Go forth, provide authority, and make snap decisions with absolute confidence! You will be materially better off, and who can fathom the cost or currency of the soul?

Sunday, June 15, 2008

Presence in the Cloud: It's not just for Buddy Lists Anymore!

I have been a huge fan of IM applications for a long time. There is just something wonderful about a dynamic list of agents and activities all flashing away down there on my taskbar. Presence is one of the cooler things to come out of the IM wave. But what happens when more than your friends start advertising their presence?

Imagine a grid of virtual instances, all advertising their current state via presence. They let you know load, health, activity, and heck, they could even tell you they are bored. Now imagine another set of agents watching these node statuses. When a node is bored it gets spun down. When a component reports ill health, a ticket is issued for maintenance (hey IT guy, replace drive 145 in array 6).

What components do we currently write that would benefit from presence?
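The watcher idea above can be sketched in a few lines. Node, Watcher, and the status names here are all invented for illustration; a real system would receive these updates over a presence protocol like XMPP rather than from an in-memory list:

```ruby
# A node's presence update: its name and current advertised state.
Node = Struct.new(:name, :status)

class Watcher
  attr_reader :actions

  def initialize
    @actions = []
  end

  # React to each presence update as it arrives.
  def observe(node)
    case node.status
    when :bored then @actions << "spin down #{node.name}"
    when :ill   then @actions << "ticket: check hardware on #{node.name}"
    end
  end
end

watcher = Watcher.new
[Node.new("node1", :busy),
 Node.new("node2", :bored),
 Node.new("node3", :ill)].each { |n| watcher.observe(n) }

watcher.actions
# => ["spin down node2", "ticket: check hardware on node3"]
```

Busy nodes are left alone; the bored one gets spun down and the unhealthy one gets a maintenance ticket.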

Scaling the Beast (Time and Hardware)

These are interesting times. Computer power keeps increasing while at the same time falling in price. Bandwidth continues to expand. As a result of all this horsepower, languages continue to proliferate. I have in the last few years spent a lot of time with ruby and python, which are two very powerful and yet, in relative terms, slow-running languages. The great thing is that it no longer matters.

But this is old news. Now it feels like we are heading for an even bigger shift. The virtualization and commoditization of data centers is in full swing. Companies are offering full hosting stacks as a full-blown service. Check out AWS, Mosso, 3Tera, and App Engine for just a taste of what is coming. These offerings include not only pay-as-you-go hosting and hardware services that scale from very small (some even start at free!) but are also starting to offer powerful software services such as data (not just file) storage, message queueing, and content delivery.

Software components running on the grid, storing their data on the grid, being accessed from other components on the grid. All running on this virtualized platform. No OS patches, no external security concerns, brain dead deployment. Point and click (or invoke your batch script) and send your component to the sky, spin up instances on demand, and revel in an ever approaching complete abstraction from almost all of the scaling, cost, and security issues that plague web developers today (twitter anyone?).

Because of this upcoming shift away from hardware instances and toward a component- or service-based computing model, many language concerns are slowly becoming obsolete. As virtual machines continue to gain popularity with developers, OSes and compile-time dependencies are slowly being replaced by platforms. These platforms behave almost the same regardless of the underlying system. They all have the ability to make and receive network calls to provide and consume services. As a result, more and more services are being accessed via the network using simple RESTful or other largely text-based APIs. This frees languages from the burden of writing, porting, and maintaining each service for every single language out there. As a result, standard libraries can shrink, and services grow.

CouchDB, Solr, SimpleDB, and Google's database APIs are all fine examples of this from a data perspective. There is the Stomp messaging protocol for talking to JMS services; XMPP for instant messaging, presence, and a host of other interesting applications; and IMAP, POP3, SMTP, and many other internet protocols that have been around for eons. All these protocols allow components to expose and consume services without having to worry about many of the complexities that have been associated with component-based architectures. Not only does this let you write one API that can be consumed by any language, it also skirts the memory management issues of making calls to libraries in process. Who cleans up the stack? Who allocates and frees which memory? All thorny issues that for the most part simply get in the way and soak up brain cycles. As a secondary effect, it sets up each component to scale out. Many bad habits that developers can fall into just aren't available to a loosely coupled component architecture. There are firmer parameters that, when followed, allow for near infinite scaling (disregarding cost issues for the moment). These parameters encourage good programming practices (horizontal scaling, loose coupling, separately versioned components).

As more and more functionality moves out of in process libraries and into network components, developers will be freed to make language decisions not on libraries but on the syntax of the language itself. This should be good for everyone. There is no one size fits all language, and developers all have different personalities. The ability to choose platform based on components rather than virtual machine, language, or OS is something I am personally really looking forward to.

Sunday, June 08, 2008

Stationary Optical Drive

It seems stupid that we spin disks mechanically several thousand times a minute. It's a lot of work, with a lot to go wrong. It generates heat and noise, and it puts hard limits on how fast we can read data based on how fast we can spin a physical object.

What if instead we read data off a stationary platter with a stationary reader? We direct the reader using a method similar to a CRT: two electromagnets, one for the x axis and one for the y axis. These aim the light onto the desired position on the platter, which then reflects onto a reflective bowl whose focal point is a sensor.


        +---- Sensor
        v
 -------S-------  <-- Platter
 \             /  <-- reflective bowl
  \-----|-----/
        ^
        +---- read laser and CRT-like aiming device


We would probably need to sustain data rates of over 1GB/s and storage comparable to existing DVDs to be interesting.

Sunday, June 01, 2008

Brains, Networking, or Luck?

I have been through several interviews in my life where I just couldn't believe I didn't get an offer. I was smart, quick-witted, and nailed every question. Then you wait two weeks and get a two-sentence email in your inbox saying they are going a different direction, thanks for playing.

Rejection never feels good, and you often wonder what happened, what you could have done to influence the outcome. Truth be told, it often has more to do with how your interviewers were feeling about themselves during that fateful half hour. Are they late on a big project? Did they eat that huge burrito with extra hot sauce again? Did they just get a raise and think they can accomplish anything with just a few more team members?

In an interview you basically have the span of an hour to evaluate how competent and amazing of a person the candidate is. Will this person make my life and the company's future significantly better? That's a tall order in that short of time. In fact, it's really bordering on impossible. Unfortunately given the reality of the situation, what usually happens is more or less luck. While the smart and the connected certainly have an edge up in the odds, all in all I would be surprised if luck did not have a significant if not the majority role in most decisions.

A key point to remember is that when you are evaluated for a job or really anything else in life, you are not being evaluated or judged. Your name (possibly), a few run-on sentences from your resume, and a couple flashes of memory recalled from your blip in that person's life are what is really being judged. This is both comforting and frustrating. Remember, very few things in life are an actual rejection of you. They are almost invariably a rejection or acceptance of a few bits about you that may or may not be true, and a whole pile of near random conditions.

Harder still is to keep all this in mind when hiring. All I can really come up with is to make sure you have multiple interviews on different days before deciding on an offer. At least this way you can level out some of the curve of what you ate for breakfast.

This all reminds me of three very different world views out there. The one where the "man" controls the world (think Marx), the one where the mob controls the world (think Rand's Atlas Shrugged), and the one where randomness rules them all (think Taleb's The Black Swan). I have held a variation of all three views at different stages in life, which one are you?

Sunday, May 25, 2008

Uncertainty, Arrogance, and Software

Why is software so hard? Why do projects routinely run over budget, over time, and require more people and hardware than forecast? Why is software development so far behind the other engineering disciplines when it comes to projects and budgeting? These are common questions for anybody working in or with the software industry.

Programmers will usually talk about complexity, changing requirements, mismanagement, or the pace of technology. While these are all real reasons why writing software is so hard and time consuming, they don't necessarily address why we are so often mistaken about the amount of effort a given project requires. Even being fully aware of the aforementioned problems, most programmers will still give you a rosy-eyed estimate that will be woefully off target (almost always underestimating the time and resources required for a given task).

I have been working at this for over 10 years now, and I am sad to say I don't feel any better at this than when I started. All I have learned is a healthy respect for the amount of time and effort those little boxes "encapsulate", and a near phobia of commitment to estimates of how long any given "piece" will take.

Central to the issue is our biological need to feel that we understand a situation in order that we might act. It seems a deep-seated reality that we are wired to generalize, categorize, and abstract messy reality so that we can make some forward progress in this crazy world. We are masters of this, and whisk away ugly detail with ease, boxing up thorny problems with the stroke of a dry erase marker, creating orderly corrals where our shiny new abstractions live under words like "storage", "service", and "analysis".

When you are dealing with tigers jumping out of the woods at you or a fist speeding towards your face, this ability to take a messy situation and make it simple is a very useful trick. Pontificating about the underlying motives of the fist wielder or determining the exact species of tiger hurtling toward you might be excellent strategies if time were no object; in our reality they just get you dead. In our reality, overestimating the amount of knowledge you possess in order to come to a decision more quickly can be a huge advantage when time-critical situations are on the menu. In fact, this was the usual case until fairly recently in human history. It's no wonder that we are geared to handle these types of situations given the frequency and risk/reward of each type.

Unfortunately, this can leave us arrogant yet delusional, which can be a pretty nasty combo. Our tendency is to quickly feel like we understand a situation and are ready to act on it, while in reality we have very little understanding of the actual problem. This leads us to overestimate our understanding of the factors involved, which leads to an underestimation of the time involved (since we have to not only implement what we didn't understand but also figure it out). Combined with the inconvenient tendency of reality to be more fractal than linear in nature, this means that things tend to get more complicated as you investigate them. The devil's in the details, as they say.

For a much better account of our epistemic arrogance as it relates to many interesting (non-software) issues, check out Nassim Taleb's excellent book The Black Swan.

Sunday, May 18, 2008

Google App Engine


If you haven't checked out Google App Engine yet you really need to. I wrote my first Facebook app backed by App Engine. The barrier to entry has now actually dropped to zero. It is amazing: marketing through Facebook friends is free, the hosting and database are now free, and all the tools required to write each piece are free. I look forward to seeing what comes out of this emerging platform.

Sunday, May 11, 2008

Nuclear Power and the Cost of War

While doing a little BSing with a friend on IM about the costs and merit of war for oil, I started wondering what it would look like if we instead spent this money on nuclear power plants. Now perhaps this isn't exactly your favored solution, but relax, I don't make policy, and this is just a thought experiment.

According to the ever popular Wikipedia, we in the US consumed 29,000 TWh of juice in 2005, equivalent to an (average) consumption rate of 3.3 TW. Likewise we are informed that the capital costs of a new plant are in the $5,000-6,000 per kilowatt range, or let's say $5.5 per watt of capacity. Now, in our little war in Iraq we are likely spending around 4 trillion dollars (see past posts for references). A little math reveals that would buy us $4,000,000,000,000 / $5.5 per watt ≈ 730 GW of capacity. Given we consume energy at an average rate of around 3,300 GW, that would cover about 730 / 3,300 ≈ 22% of our needs. Now obviously we would have to pay to run the plants, guard them, and dispose of the waste; I will leave it up to the reader to figure out those costs as I need some sleep now.
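Spelled out as a quick calculation, using the figures above:

```ruby
war_cost   = 4.0e12  # dollars spent on the war (rough estimate)
per_watt   = 5.5     # dollars per watt of new nuclear capacity
avg_demand = 3.3e12  # watts (29,000 TWh/year averages out to ~3.3 TW)

capacity_gw = war_cost / per_watt / 1.0e9             # => ~727 GW bought
share_pct   = capacity_gw * 1.0e9 / avg_demand * 100  # => ~22% of demand
```

Note this buys capacity only; running costs, security, and waste disposal are on top, as mentioned above.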

In summary, we could crank out a lot of nuclear power plants for the cost of this war for oil. Food for thought.