Thursday, November 16, 2006

Video Conferencing on a budget.

Holding our monthly company-wide meetings has always been problematic: half the team is in India and the other half in Milwaukee. Other problems include the 512kbps bandwidth for the team in India, and the lack of will to spend $10k (per location!) on a Polycom solution.

Having two groups of people in their respective conference rooms and trying to hold a discussion has proven to be very different from sitting in front of your computer, wearing a headset and chatting over Skype. The microphone has problems picking up individuals in the room, the use of external speakers causes horrendous feedback, and the low video resolution that didn't matter before matters now, because we can't make out individuals in the room.

So I was appointed to do a little research into solving this issue. Had I been given a budget of $20,000 it would have been quite a simple job: buy a couple of Polycom systems and be done with it. However, my budget was in the 'something reasonable' range.

I decided to try out iChat on the Mac, mainly because I have family in Scotland and Australia and have a lot of experience with using it for video conferencing, and swear by it. The biggest problem with iChat would be firewall configuration, but since we have an inter-office VPN, it's not a problem at all. To make this choice even easier, we already had a Mac Mini in the office in India.

Setting up a test was as simple as getting Geoff to bring his MacBook Pro to our next company meeting. Setup was quite painless; we did get an 'insufficient bandwidth' error, but solved that by dialing the speed down from 'unlimited', and then it was up and running.
The results were very positive. The video was by far the best quality we have ever had: nice high resolution, and we could see everyone in the room very clearly. The audio was more of a mixed bag. On the upside, the feedback/echo was very slight, but on the downside the mic had trouble picking up people in the room, and the speakers were quiet even with the volume maxed out at ten. I think it's the first time I've really wished that we could turn speakers up to eleven.

Since then I have researched external speaker/microphone combos and was able to locate a higher-end one made by ClearOne, called the Chat 50. It's quite pricey at $130, but if it solves our sound problems it will be worth every penny.

Since the test went so well we will be purchasing a Mac Mini and iSight for the Milwaukee office, and also a Chat 50 to try out.

All in all the total cost will be about $900, with maybe some potential cost savings by getting a refurbished Mac Mini.

Definitely something reasonable.

Monday, August 28, 2006

Differences between meta-programming and intentional programming

In an effort to post something to this blog, here are some current thoughts:


After some reflection I have decided that meta-programming and intentional programming are two very different things.

Meta-programming is just about using and creating custom programming languages. The only intent meta-programming provides is whatever comes from the domain specificity of the languages in use. The actual code generated is still very mundane. Nothing special here.

Intentional programming, to me, means doing much more than just this. It is about leveraging the intent as much as possible. Part of capturing the intent is raising the level, and meta-programming is a great way to do this. But intentional programming can be so much more than writing software at different levels.

If intent has been encoded in such a way that we can leverage it, then the possibilities are huge.
For example:
When about to perform a lower-level db query we can, at that point in time, look and see what the higher-level intent(s) is. Is the intent just to display this data on the UI? Is the intent to do X with the data?
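
A minimal sketch of the idea in Java (the Intent annotation, Purpose enum, and query names here are all invented for illustration, not any real framework): the intent gets recorded right next to the query definition, and the lower layer can look at it before deciding how much data to actually fetch.

import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.reflect.Method;
import java.util.List;

// Hypothetical marker recording the higher-level purpose of a low-level query.
enum Purpose { DISPLAY_ONLY, FULL_UPDATE }

@Retention(RetentionPolicy.RUNTIME)
@interface Intent {
    Purpose value();
}

interface CustomerQueries {
    // The caller's intent travels with the query definition.
    @Intent(Purpose.DISPLAY_ONLY)
    List<String> customerNamesForScreen();
}

public class IntentAwareQueryLayer {
    // The lower layer inspects the declared intent before building its SQL:
    // display-only reads get a narrow projection, anything else a full row fetch.
    static String planFor(Method queryMethod) {
        Intent intent = queryMethod.getAnnotation(Intent.class);
        if (intent != null && intent.value() == Purpose.DISPLAY_ONLY) {
            return "SELECT name FROM customer";   // cheap, read-only plan
        }
        return "SELECT * FROM customer";          // conservative default
    }

    public static void main(String[] args) throws Exception {
        Method m = CustomerQueries.class.getMethod("customerNamesForScreen");
        System.out.println(planFor(m));           // prints the narrow plan
    }
}

The reflection trick isn't the point; the point is that the 'why' of the query survives down to the layer that executes it, so that layer can make a smarter call than a one-size-fits-all query.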

We seem to have got into this rut of having very low expectations of what we can do: creating static code that's very dumb and doesn't know much, and then doing a huge amount of work to make up for all these deficiencies.

When we (the developers) write the code, we are taking into consideration the intent and the implementation of many, many layers. But what we create is code that doesn't know the big picture, and can't help but be brittle and prone to errors. Heck, a person given three parameters and an algorithm in isolation couldn't do much better either.

Saturday, June 03, 2006

IMPLEMENTATION MATTERS continued

I was working with Java collections the other day and it got me thinking...

From the JavaDoc:
ArrayList
The size, isEmpty, get, set, iterator, and listIterator operations run in constant time. The add operation runs in amortized constant time, that is, adding n elements requires O(n) time. All of the other operations run in linear time (roughly speaking).

LinkedList
All of the operations perform as could be expected for a doubly-linked list. Operations that index into the list will traverse the list from the beginning or the end, whichever is closer to the specified index.



This speaks to my previous comment "Interacting components will not only be able to interface with each other through a well defined contract but also have a conversation about each others implementation details."
Why do we determine what data structures and implementations to use at design time? For a dynamic system this is not really a decision you can make at compile time, because (as you can see above) so much of the performance depends on the actual implementation.
Using metadata and metaprogramming we could provide this information along with our module/component/call-it-what-you-will and then the system as a whole can make a call as to what is the best way to format the data for maximum performance.
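
As a trivial, hand-rolled sketch of the idea in Java (the AccessProfile metadata and the chooser are made up for illustration), the component declares how it intends to use the collection, and something else decides which implementation to hand back:

import java.util.ArrayList;
import java.util.LinkedList;
import java.util.List;

// Hypothetical metadata a component could publish about how it will use a list.
enum AccessProfile { MOSTLY_INDEXED_READS, MOSTLY_HEAD_AND_TAIL_CHANGES }

public class CollectionChooser {
    // The "system as a whole" picks the implementation from the declared profile,
    // instead of the developer hard-coding one at design time.
    static <T> List<T> listFor(AccessProfile profile) {
        switch (profile) {
            case MOSTLY_HEAD_AND_TAIL_CHANGES:
                return new LinkedList<T>();   // constant-time insert/remove at the ends
            case MOSTLY_INDEXED_READS:
            default:
                return new ArrayList<T>();    // constant-time get(index)
        }
    }

    public static void main(String[] args) {
        List<String> workQueue = listFor(AccessProfile.MOSTLY_HEAD_AND_TAIL_CHANGES);
        List<String> lookupTable = listFor(AccessProfile.MOSTLY_INDEXED_READS);
        System.out.println(workQueue.getClass().getSimpleName());   // LinkedList
        System.out.println(lookupTable.getClass().getSimpleName()); // ArrayList
    }
}

Today that decision lives only in the developer's head; once the usage profile is explicit metadata, the system as a whole can make (and re-make) the call.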

Same for my games programming / graphics card example, the graphics card can let the calling program know what types of data result in what types of performance, then the system as a whole can determine which way to format which data and even decide at runtime which underlying implementations to use (ArrayList or LinkedList).

Saturday, May 27, 2006

IMPLEMENTATION MATTERS

Conventional programming techniques tell us that when creating a large software system we should isolate components from each other using the ‘black box’ technique.

The idea behind this is simple: “why expose all the complexity of the system to third parties?” Instead, you can define an abstraction for the complex system and enable interaction through it. Utilizing the abstraction provides us with many advantages; the most relevant is the ability to interact with this component without understanding everything about it.

But the technique of abstracting out a complex system generates problems of its own; problems I believe cripple our ability to move (the engineering of) software forward. The following are some of my thoughts (ramblings) on this:

MONOLITHIC SOFTWARE

We cannot engineer software that doesn’t depend on implementation.

Even though we create systems that may have hundreds or thousands of interactions through approximate interfaces, this software still only works when treated as a single piece of unchanging code.

We can only engineer software we are confident works if at some point in time we lock down the whole codebase and make it monolithic.

While some systems (like Firefox or Eclipse) do support plug-ins at runtime, these changes are cosmetic at best: the plugged-in components typically don’t have other (unexpected) components depending on them; they are simply plug-ins at the extremities of the dependency graph.

IMPLICIT CONTRACT

When we work with interfaces, we are really working with the implementation.

The myth is that if we define an interface completely enough, then the implementation doesn’t matter.

However, during the development and testing cycles we are testing against an implementation of the contract, and by doing this we are really no longer working with a black box system; we are working with an implementation, and so with a contract implicitly derived from that implementation. This is sometimes referred to as a leaky abstraction.
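
A tiny Java illustration of the kind of leak I mean (the TagSource interface and the tag values are invented for the example): both implementations honor the written contract, but a test written against the first one quietly absorbs an ordering guarantee the contract never made.

import java.util.Arrays;
import java.util.HashSet;
import java.util.LinkedHashSet;
import java.util.Set;

// The written contract: "returns the tags attached to a document".
// Nothing in it promises any particular ordering.
interface TagSource {
    Set<String> tags();
}

public class LeakyContractDemo {
    public static void main(String[] args) {
        // Implementation A happens to use LinkedHashSet, which preserves insertion order.
        TagSource a = () -> new LinkedHashSet<>(Arrays.asList("urgent", "draft", "legal"));

        // Implementation B uses HashSet: same contract, unspecified ordering.
        TagSource b = () -> new HashSet<>(Arrays.asList("urgent", "draft", "legal"));

        // A "test" written against implementation A quietly depends on iteration order.
        String expected = "[urgent, draft, legal]";
        System.out.println(a.tags().toString().equals(expected)); // true, but only by accident
        System.out.println(b.tags().toString().equals(expected)); // quite possibly false
    }
}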


COMPLETE CONTRACT

There is a way to define a contract so completely that its implementation will be unambiguous.

The only time you can successfully interact with a system without caring what the implementation of the black box is, is when the contract is defined so completely that there is no ambiguity in the implementation. The good news is that there is such a contract! The bad news is that this contract is the implementation itself!

Even a logically equivalent implementation would not be sufficient, i.e. code that for every possible combination of inputs would give the same outputs as a different implementation, since there could be different internal failure points and dependencies.

MINOR FLAWS MULTIPLIED

Inconsistencies in implementations create flaws, which become magnified in dynamic systems.

Currently we have no way to create software that can be truly assembled at runtime with differing implementations, because it would just not work.

Imagine if, using today’s technology, we defined contracts for 100 different components, all of which interact with and leverage each other. We then give these 100 components to two different groups of people to implement, and both sets of implementations fulfill the contracts.

If we then try to run the system and randomly choose which of the two implementations to use for each component, the system would never work.

SPECIFICATIONS

Specifications are just a contract.

Specifications are really another face of the same problem: just another way to try to define a contract without defining the implementation.

A great example of this is the attempt by Sun Microsystems to define the J2EE spec so as to make EJBs vendor neutral. The idea was that you could build an EJB for IBM’s WebSphere Application Server, deploy it on BEA’s WebLogic Server, and it ‘would just work’.

In practice this was simply not the case; even the hugely increased detail in the subsequent EJB specifications has not made them portable.

INTENTIONAL PROGRAMMING

DSLs have implementations too.

The current metaprogramming technologies also have this problem. A metaprogram is eventually implemented, and the DSL itself can and will be implemented in an ambiguous way as well.

However, there is good news: capturing intent at many different levels (layers) gives us the potential to have so much more information to work with.

IMPLEMENTATION MATTERS

And it’s ok! We should stop trying to pretend that everything works the same.

Thanks to all this extra information we now have a broader semantic description of a programmer’s intent.

Once we have entire systems written this way, we will then be able to do so much more.

Interacting components will not only be able to interface with each other through a well defined contract but also have a conversation about each others implementation details.

Think of the optimizations that a game programmer typically does to get high-performance rendering: even though the programmer is interacting through a generic interface (like DirectX), she is only able to truly achieve great performance if she knows a lot about the implementation of the graphics card behind the generic API.

While critical to high performance, these optimizations are typically not some great insight; they are just the application of gained knowledge. There is nothing here that couldn’t be automated: if the graphics card could communicate to the calling program the most performant way to structure its graphics data (texture size, byte alignment, etc…) then this system could be as fast as one coded by hand.
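
A toy version of that negotiation in Java (DeviceProfile and its numbers are invented for illustration; real graphics APIs expose capabilities very differently): the 'card' reports its preferences and the calling code shapes its data to match, instead of a programmer baking that knowledge in by hand.

// A made-up DeviceProfile the card publishes about itself.
class DeviceProfile {
    final int preferredTextureSize;   // e.g. textures are fastest at 256x256
    final int byteAlignment;          // e.g. vertex buffers aligned to 32 bytes

    DeviceProfile(int preferredTextureSize, int byteAlignment) {
        this.preferredTextureSize = preferredTextureSize;
        this.byteAlignment = byteAlignment;
    }
}

public class AutoTuningRenderer {
    // Round a requested texture size up to something the card says it likes.
    static int textureSizeFor(DeviceProfile card, int requested) {
        int size = card.preferredTextureSize;
        while (size < requested) {
            size *= 2;                // stay on power-of-two boundaries
        }
        return size;
    }

    // Pad a buffer length out to the card's preferred byte alignment.
    static int alignedLength(DeviceProfile card, int length) {
        int a = card.byteAlignment;
        return ((length + a - 1) / a) * a;
    }

    public static void main(String[] args) {
        DeviceProfile card = new DeviceProfile(256, 32); // as reported by the "card"
        System.out.println(textureSizeFor(card, 300));   // 512
        System.out.println(alignedLength(card, 1000));   // 1024
    }
}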

FUTURE

This is what I see in the future thanks to metaprogramming technologies like intentional programming: this will be one of the first times where a new technology will actually be able to run faster than the previous one. Think of a complex system which runs as fast as if it had been completely coded by hand at every level. Not only this, but also a system that would truly be dynamic, since it would be able to change implementations behind abstractions without the loss of stability or performance.

Friday, April 07, 2006

RAD using 1 GB of RAM

I used to think having a couple of gigs of RAM was enough. Then I met my new friend Rational Application Developer!