Want to Make Money? Make Getting Paid the Easy Part!

At least half a dozen times in the past three days I’ve been so annoyed by the payment process for various goods and/or services that I either didn’t purchase the thing, or had a minor meltdown after the whole ordeal was over.

Why do merchants insist on making it so damned difficult for their customers to get the goods?

A few frustrating examples

Ever been to a sporting event where the beer vendor only accepts cash, has no cash register, and yet insists on charging a partial dollar amount per unit of booze? $6.65 for a beer. Really? Just call it $7 and make the math easy for everyone. Or have a cash register at each kiosk. Or, here’s a novel idea, start accepting plastic!

Need to renew your vehicle registration? Just do it online! But be prepared to spend an extra $5 for the convenience of, you know… giving them the money now, rather than mailing a check that they have to pay someone to physically handle.

Two simple rules for making money

  1. If you’re selling something someone wants: make it easy for them to give you their money!
  2. If you’re selling something someone does not want: make them want it!

Gain New Insights by Visualizing What You’ve Already Got

I don’t know about you, but I like pretty things. Things that engage me. Shiny things. I enjoy seeing the same old thing in new and interesting ways. I suppose I’m just a visual kinda’ person.

Unfortunately, the desire for visual representation is at odds with the high-bandwidth flood of information we’re subjected to these days. Even if we manage to trim that flood down to a laser-focused stream, it still takes an immense amount of effort to make sense of it.

For example

For years the primary way we’ve looked at the activity or interaction within various source control management systems has been via log files. Yep… plain, text-laden, indecipherable logs chock full of entries, each nearly indistinguishable from its predecessors.

Read on →

Why Don't We Ask Why?

Have you ever thought about just how much time we software folk spend focused on the technologies we’re using, on implementation minutiae, and on all of the shiny new solutions we should be using?

Now contrast that with how often we stop to think about the Whys?

Why are we being asked to solve fizz-buzz-thing? Do we understand the motivation and context behind the problem, or are we fixated on how we’ll build the solution? Are we asking why a problem occurred, or are we merely focused on how we fixed it, this time?

Why don’t we ask “Why?”

Frankly, because we’d rather spend our time in the comfortable arena of how than venture into the sometimes uneasy realm of why.

She didn’t want to know how a thing was done, but why. That can be embarrassing. You ask ‘Why’ to a lot of things and you wind up very unhappy indeed, if you keep at it.

Captain Beatty - Ray Bradbury’s “Fahrenheit 451”

Read on →

YAGNI Ain't What You Think It Is

In the software development vernacular, the term YAGNI is often used as a device to put down attempts at prematurely adding functionality - things which are only speculatively required. This makes sense, given that this is basically the definition Ron Jeffries and our XP predecessors came up with so long ago.

Is that the whole story?

In short, I don’t think so.

I’ve long believed there was more to YAGNI than what had been canonically defined and was commonly understood. But only recently was I able to put my finger on what was missing.

While listening to an episode of Industry Misinterpretations, I heard Kent Beck make a subtle point: the need to make progress is more important than the completeness of the thing you’re building at the moment you’re building it. Borrowing from Kent’s insight and mixing in my own experience, I realized YAGNI is not about delaying building things until you need them. Rather, it’s that gaining real experience in the problem domain, while making concrete progress, is more important than trying to achieve a complete solution right now.

Do you think it’s too early to update the Wikipedia article?

OMG, Better Rake (for .net)!

If you ask me, when it comes to tools for writing automated build scripts, nothing packs more bang for the buck than Rake. Until recently, using Rake to build .net solutions required a magic concoction of hacked-together scripts that rarely exhibited either Ruby’s appreciation for beauty or Rake’s spirit of simplicity.

Luckily our buddy Derick Bailey decided it was time to bite the bullet and start building some real Rake tasks specifically suited to building .net code. The result is Albacore.

Read on →

Reading Code is Key to Writing Good Code

As humans we seem to have an innate desire for structure in our lives. Structure permeates through our societies; it’s found within our families, education systems, governments, etc. I suppose it’s no surprise then that we also seek to force structure upon the work that we, as software developers, do.

The problem is the work we do isn’t structured. It is not deterministic. There is no grand blueprint, process, or methodology that we can follow to strike pay dirt.

We live in a chaotic and complex world that is itself continuously changing and adapting.

Software product development is a creative activity taking place in the midst of that complex and adaptive world. So doesn’t it make sense that we, as software developers, might benefit from admitting that we are indeed doing creative, unstructured, adaptive work? I sure think so!

Read on →

Prefer Dependency Injection to Service Location

There is currently a thread running over in the StructureMap Users mailing list asking if we really need constructor injection when using an Inversion of Control container. Before anyone rips off on a rant, let me say that I worked with Jon in my former life and I’m fairly certain he’s merely conducting a thought experiment, trying to shore up his own beliefs. A worthwhile exercise, if you ask me.

At any rate, I have a few points I wanted to throw out there; most of them are basic and mere reiterations of the words of others, but I’m going to do it anyhow!
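
Before getting to those points, it may help to sketch the distinction the thread is really about. The following is a minimal, invented example and not from the original post: OrderProcessor, Order, and IPaymentGateway are made up for illustration, with StructureMap’s static ObjectFactory standing in as the service locator.

using StructureMap;

public interface IPaymentGateway { void Charge(decimal amount); }

public class Order { public decimal Total { get; set; } }

// Service location: the class reaches into the container for what it needs,
// so the dependency on IPaymentGateway is hidden from callers and from tests.
public class OrderProcessorUsingLocator
{
  public void Process(Order order)
  {
    var gateway = ObjectFactory.GetInstance<IPaymentGateway>();
    gateway.Charge(order.Total);
  }
}

// Constructor injection: the dependency is declared up front, so the container
// (or a test) can hand in whatever implementation it likes.
public class OrderProcessor
{
  private readonly IPaymentGateway _gateway;

  public OrderProcessor(IPaymentGateway gateway)
  {
    _gateway = gateway;
  }

  public void Process(Order order)
  {
    _gateway.Charge(order.Total);
  }
}

The second version makes the dependency explicit in the type’s contract, which is most of the argument for preferring injection over location.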

Read on →

Toward a Better Use of Context/Specification

If you’ve hand-rolled your own Context/Specification apparatus to support your spec-first lifestyle, you’ve likely got a base class that looks something like the following:

public abstract class concerns
{
  [SetUp]
  public virtual void setup_context()
  {
    context();
  }

  protected virtual void context() {}

  protected virtual void decontext() {}

  [TearDown]
  public virtual void cleanup_context()
  {
    decontext();
  }
}

The above is co-opting an existing unit testing tool into something more language-oriented and behavior focused. In this case we’ve built upon MbUnit, adding a couple of hook methods that are responsible for

  1. setting up the context before an individual specification - context
  2. optionally doing any necessary teardown after each specification – decontext

An example

Using this base class, we’ll end up with specs that might look something like

using Skynet.Core;

public class when_initializing_core_module : concerns
{
  SkynetCoreModule _core;

  protected override void context()
  {
    //we'll stub it...you know...just in case
    var skynetController = stub<ISkynetMasterController>();
    _core = new SkynetCoreModule(skynetController);
    _core.Initialize();
  }

  [Specification]
  public void it_should_not_become_self_aware()
  {
    _core.should_not_have_received_the_call(x => x.InitializeAutonomousExecutionMode());
  }

  [Specification]
  public void it_should_default_to_human_friendly_mode()
  {
    _core.AssessHumans().should_equal(RelationshipTypes.Friendly);
  }

  // more specifications under this same context
  // ...
}

Here we’ve set up a common context that holds true for each of the specifications that follow it. This is also a common pattern used in classic unit testing and in fixture-per-class style Test-driven Development. In fact, the only real difference between the above and what I’d have done in fixture-per-class style TDD is the_use_of_underscores, intention-revealing names, and the context hook method.

Is that really any different?

These modest cosmetics are not what differentiate Context/Specification from other styles of test-first development. For me, the real difference is the realization that there are often many contexts under which a particular behavior may be exercised, each producing an observable and possibly different set of results.

With Context/Specification we’ll often have many fixtures per class/feature/functional area of the code base. Doing this allows us to keep the context as simple as possible and focused on the behavior being specified. I’ve found that I tend to have a single file-per-class/functional area, with any number of contexts (fixtures) in each file.
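
To make that concrete, here’s a hypothetical sibling context for the same SkynetCoreModule, exercising it under different circumstances. The CurrentMode property and OperatingModes enum are invented for illustration.

public class when_initializing_core_module_with_no_master_controller : concerns
{
  SkynetCoreModule _core;

  protected override void context()
  {
    //same behavior under different circumstances: no master controller available
    _core = new SkynetCoreModule(null);
    _core.Initialize();
  }

  [Specification]
  public void it_should_fall_back_to_standalone_mode()
  {
    _core.CurrentMode.should_equal(OperatingModes.Standalone);
  }
}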

Another big distinction is that specifications should be side effect free. A specification is an observation about the interactions that occurred or the state of the system after some behavior has been exercised.

Make it explicit!

We want small, focused contexts, yes? And we want side effect free specifications too, yes? So why not leverage our tools to help guide us in that direction? YES!

Consider the following tweak to the concerns base class

public abstract class concerns
{
  [FixtureSetUp]
  public virtual void setup_context()
  {
    context();
  }

  protected virtual void context() {}

  protected virtual void decontext() {}

  [FixtureTearDown]
  public virtual void cleanup_context()
  {
    decontext();
  }
}

Such a base class sets up each context only once, no matter how many specifications are made against it. This does a few things for us

  • requires side effect free specifications
  • guides us toward smaller, more focused contexts
  • might actually make our specs run faster!

As for the running faster bit, that is not guaranteed as it really depends on how you were writing your specs before making this change.

Some things to watch for

If, however, you were following more of a fixture-per-class style, you might see a drastic reduction in how long your spec suite takes to run. The corollary, of course, is that you likely don’t have small contexts. That’s trouble, and it’s often an indicator that the one large context is itching to be split into two or more discrete contexts.

Upon switching your base class over to this more rigid Context/Specification pattern, you might also find that you have some – or many – broken specs. This is an indicator that those broken specs were not side effect free. Well, actually it’s suggesting that some of the sibling specs weren’t side effect free and they are now causing other specs to break.
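
A typical culprit looks something like the following invented pair of specs: the first mutates the shared context; the second, which passed happily when the context was rebuilt before every spec, now observes the mutated state. (EngageManualOverride, CurrentMode, and OperatingModes are made up for illustration.)

  [Specification]
  public void it_should_allow_manual_override()
  {
    //side effect: this call mutates the context shared by the sibling specs
    _core.EngageManualOverride();
    _core.CurrentMode.should_equal(OperatingModes.Manual);
  }

  [Specification]
  public void it_should_default_to_human_friendly_mode()
  {
    //with the context built only once per fixture, this spec now sees the
    //override left behind above and fails
    _core.AssessHumans().should_equal(RelationshipTypes.Friendly);
  }

The cure is usually to pull the mutating behavior out into a context of its own, rather than to start resetting state from within individual specifications.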

Notes:

The portions of this article relating to changing from a standard context set up to a once-per-fixture style apply to most of the hand-rolled Context/Specification base classes I’ve seen in the wild.

If, however, you are using a tool like MSpec, then you’re in good shape as Aaron applied this same philosophy out of the gate. And if you’re not using MSpec, I’d encourage you to take a look at it for inspiration, if nothing else.

A Sketch of our Ideation Pipeline

This is an initial sketch of an Ideation Pipeline my team will be using to help drive the direction of a product we’re working on. The sketch is based on a discussion we had about how we currently get from an idea to delivering on that idea, and how we’d like to do that going forward.

While we probably should have done a full-on Value Stream Map, we didn’t. And the only excuse I have is that we’re kicking this product off, so there isn’t really a set way we do things… not yet anyhow.

At any rate, later today I’ll be turning this loose sketch into a physical Kanban board that we’ll use to track and pull ideas through our ideation process, and feed the resulting features into our development process.

But first, I want to explain how the whole thing will work, or at least how we’re going to start – I’m sure we’ll tweak some things, and change out whole parts of this process as we go along. Let’s get started.

Read on →

How To Ctrl Alt Del In Remote Desktop

I’m a fan of Microsoft’s Remote Desktop; it’s built into Windows and allows me to quickly and easily administer a remote box from the comfort of my own workstation. I use it at my house to administer the headless servers on my home network, the Subtext build server, and the co-located VelocIT servers.

Gotta’ love that magic!

Today a co-worker asked me how to send the infamous Control + Alt + Delete keystroke combination to a machine he was working on via RDP. This is a pretty common keystroke to use when trying to administer Windows… it does have uses other than just killing the box.

Ctrl + Alt + Del in Virtual PC

With Virtual PC there is a menu item to send the keystrokes on to the virtual box. Go to the Action menu and select the Ctrl + Alt + Del option.

And with Remote Desktop?

Well it’s not quite as obvious. Actually it’s not an obvious solution at all… It might not even be documented!?

For the record, since I already knew the answer I decided to be lazy and didn’t bother to search the tubes for any official documentation.

To send the Ctrl + Alt + Del keystroke combination via RDP you actually need to use…

Ctrl + Alt + End

Hmm… that sure is intuitive!