I’ve noticed that my software development proclivities tend to follow this pattern:
Conceive of a problem in need of solving. This may involve realizing I need something, and scratching that itch, or it may involve having someone else tell me “This is what I need,” and adopting that as a goal.
Write a basic, functional command line utility with a very minimal interface that serves that need. The option parsing at this stage is usually extremely brittle, but everything works exactly as intended as long as the utility receives only expected inputs.
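As a minimal sketch of what that brittle first pass tends to look like in Ruby (the `wordcount` utility and its `count_words` helper are illustrative, not an actual tool of mine):

```ruby
# A deliberately brittle first pass: read positional arguments straight
# from ARGV, assume they are present and valid, and do the work.
# No flags, no validation beyond "is there an argument?"

# count_words returns the number of whitespace-separated words in a string.
def count_words(text)
  text.split.size
end

if (path = ARGV[0])
  puts count_words(File.read(path))
end
```

Feed it anything other than a readable file path and it falls over, but for the happy path it already does the job.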
Flesh out the interface and built-in usage documentation. Basic argument array parsing gets replaced with an argument processing library, and a --help option is implemented and fed a carefully written set of concise, complete documentation.
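In Ruby this step often amounts to swapping hand-rolled ARGV handling for the standard library's OptionParser, which generates the --help output from the option declarations themselves. A sketch, with illustrative flag names for the same hypothetical wordcount utility:

```ruby
require "optparse"

# Declarative option handling: each opts.on call both parses a flag
# and contributes a line to the generated --help text.
options = { lines: false }

parser = OptionParser.new do |opts|
  opts.banner = "Usage: wordcount [options] FILE"

  opts.on("-l", "--lines", "Count lines instead of words") do
    options[:lines] = true
  end

  opts.on("-h", "--help", "Show this message") do
    puts opts
    exit
  end
end

parser.parse!(ARGV) # flags are consumed; positional args (FILE) remain in ARGV
```

Unexpected flags now raise a clear error instead of silently misbehaving, and the usage text stays in sync with the options because they are defined in the same place.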
Code gets tightened up. Additional features get added, --help documentation gets updated, code is refactored a bit at a time, and so on.
If I decide it needs it, I start working on an alternate interface or two. This basically ends up being a wrapper script that calls the command line utility — which means (of course) that a GUI is always just a thin veil over a command line utility or two.
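The wrapper pattern itself is simple: the alternate interface shells out to the CLI utility and presents its output, rather than duplicating its logic. A sketch of the plumbing, assuming a generic helper (the names here are mine, not from any particular program):

```ruby
require "open3"

# run_cli executes a command line, returning its stdout as a string
# and raising if the command exits with a failure status. A GUI event
# handler can then be a thin call such as:
#
#   count = run_cli("wordcount", chosen_file).strip.to_i
#
# keeping all the core functionality in the CLI utility itself.
def run_cli(*argv)
  stdout, stderr, status = Open3.capture3(*argv)
  raise "#{argv.first} failed: #{stderr}" unless status.success?
  stdout
end
```

Because the GUI never reimplements anything, improvements to the CLI show up in every interface for free.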
This process has worked well for me so far, especially with a language like Ruby. It provides a few benefits for the final product(s), at least some of which are quite in line with the advice in The Art of Unix Programming (TAOUP).
From what I’ve seen, this is a particularly rare style of software development. Sure, there are plenty of GUI applications out there that are basically just wrappers for command line utilities, but those mostly result from someone tying together the functionality of several separate command line tools written by other people. Even when my ultimate intent is to have a GUI, I seem compelled to write a CLI utility first and attach the GUI as a separate program that calls the utility for its core functionality.