In today’s online world, the battle to attract users rages on, with app makers favoring either mobile or web apps. Desktop applications are becoming less and less relevant, and they also tend to be nothing more than rich clients to web apps—Electron being the popular platform of choice.
Does this mean we will soon abandon the desktop as a platform? No, of course not. Besides, while GUI apps seem to have stalled recently, there is a segment of desktop apps that continues to grow.
Have you ever seen any movie featuring hackers? More often than not, these people are shown working in front of monitors displaying some sort of terminal (usually with a dark background and light foreground). This terminal, in turn, tends to be flooded with passing characters that apparently have some meaning to the person watching them.
Such representations of hackers in action are often mocked by professional developers, and there are even some programs that simulate various “hacker” effects, just for fun.
This article focuses on the practical side of using command line interface (CLI) tools. Knowing CLI commands and using quality tools can make you more productive and can also open doors to various approaches to automation that are far more practical with textual interfaces than with GUI apps.
You can get better at doing repetitive tasks in GUI, to the point that your multiple clicks are heard as a single long one. The problem is, this still won’t beat the efficiency of a specialized script. What’s more, performing the same operations manually introduces both an added cognitive load and the increased possibility of human error. As usual, we rely on computers to handle tasks humans may find boring, repetitive, or overwhelming.
It is worth knowing that a terminal tool can offer several types of interfaces. There are non-interactive ones like ls, which simply take parameters and provide output. There are interactive or semi-interactive interfaces, most often found in package managers. (“Are you sure you want to proceed with the installation from an unverified source?”) Then, there are textual user interfaces (TUIs), full-screen interactive apps designed to fit the limitations of a terminal. Probably the most famous one is Midnight Commander (mc), a clone of the extremely popular (in the ’90s) Norton Commander.
If you want to become a console dweller, you need to equip yourself with a minimum set of command line developer tools: the bare essentials. Things you most definitely can’t live without are an interactive shell (aim for something modern with convenient tab-completion) and a text editor.
Now, I will mention the UNIX philosophy, which is often the foundation behind design decisions made by the tool’s authors, whether consciously or not. Some of the key points can be summed up as follows:
- Treat everything as a file.
- Do only one thing, but do it well.
- Read from standard input, write to standard output, and communicate errors to a standard error stream.
- On success, return code 0; a non-zero value means an error (the exact code can indicate which one).
- Allow for command chaining and scripting.
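These principles show up in everyday pipelines. A minimal sketch using only standard POSIX tools (the sample log lines are invented):

```shell
# Small single-purpose programs chained with pipes: each one reads standard
# input and writes standard output.
printf 'error: disk full\ninfo: ok\nerror: timeout\n' |
    grep 'error' |   # keep only the error lines
    wc -l            # count them

# Exit codes drive scripting: 0 means success, anything else an error.
if grep -q 'error' /dev/null; then
    echo "match found"
else
    echo "no match (grep exits with 1 when nothing matches)"
fi
```

Because every step speaks plain text over stdin/stdout, any of these programs can be swapped out without the others noticing.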
The first thing you see when opening a terminal is a shell. This is the part that makes the interaction between the user and the machine possible. It interprets your commands, splits them into program names and arguments, and executes all shell commands you throw at it.
Historically, there have been many different kinds of shells. Among the most popular ones were csh (C Shell) and various implementations of the Bourne Shell (usually known simply as sh). The Bourne Shell was extended into the Korn Shell, which also gained some traction and is still being used by its enthusiasts. Csh is currently the default shell on some BSD systems, while almost all other UNIX-like operating systems prefer some kind of Bourne shell. Linux distributions tend to favor bash, while macOS comes with zsh as the default choice.
There are other possibilities out there, but they are far less popular, except for Microsoft PowerShell on Windows systems. PowerShell is inspired in part by interactive UNIX shells such as zsh and in part by the .NET runtime. Instead of treating everything as text, a concept common in the UNIX world, it allows for object-oriented manipulation of data.
Even though Microsoft PowerShell is quite popular in the Windows realm, many programs with UNIX origins (most notably Git, Autotools, and Make) tend to prefer some variation of the Bourne Shell. Because of this, projects such as MSys (bundled with Git for Windows), Cygwin, or Microsoft’s recent WSL were born. If you want a Linux-like feeling on Windows, MSys is the best choice here. If you want a full-featured Linux environment able to run standard Linux binaries, then WSL is the way to go. For something in between—UNIX API but compiled as a Windows executable (only use it when you actually know why you need this)—Cygwin is the answer.
Once you get acquainted with your shell, you will want to pick up some useful skills. As most of the coding work revolves around writing text (code, READMEs, commit messages), a good knowledge of interactive text editors is essential. There are many to choose from, and since an editor is one of the most necessary tools for any developer, there are probably just as many opinions on which editor is best.
The most popular text editors can be separated into two basic groups: Simple text editors and programmable text editors.
Both can be great for writing code, but, as the name suggests, the programmable ones offer the ability to shape and customize the editor to perfectly suit your needs. This comes at a price, though, as they also tend to have a steeper learning curve and may require more time to set up.
Basic Text Editors
Among the simple text editors, GNU Nano is the most widespread. Actually, it is a clone of the pico editor, so if one is not available on your system, you can try the other. Another, more modern, alternative to both is the micro editor. If you want something simple and extensible at the same time, this one is a good place to start.
Programmable Text Editors
Many developers rely on programmable editors from different camps, such as Vim and GNU Emacs. Both editors can run in the console or in GUI mode, and both had an impact on the key bindings found in other software. They both offer not only an API but also actual programming languages built-in. Emacs focuses on LISP and Vim uses its own VimL, but it also offers interfaces to other popular scripting languages (like Lua, Perl, Python, or Ruby). A more recent approach to Vim, called Neovim, is also worth mentioning, as it is starting to get a serious following.
It may be somewhat confusing, but there is also an editor called vi, the predecessor of Vim (which, incidentally, stands for “Vi improved”). It is much simpler than Vim, but if you are comfortable writing in Vim, using vi when you need to should not be a challenge.
Since pico/GNU Nano and vi/Vim are usually preinstalled on various systems, it is a good idea to at least grasp their basics (quitting Vim is a notoriously hard problem for beginners). This way, if you need to edit something on a remote machine, you will be ready regardless of what editor is already there. On your private device, feel free to use any editor you find the most comfortable.
Default System Editor
One last thing to note is that your system may have what is called a default editor.
The $EDITOR environment variable points to the default editor. In Bourne-compatible shells (sh, bash, ksh, zsh), you can see it by entering echo $EDITOR. If the value differs from your personal choice, you can set it yourself by adding export EDITOR=my-awesome-editor to your shell’s runtime configuration (~/.zshrc, and so on).
Other programs, such as version control systems and mail clients, will use this editor when they need longer text input.
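To see this in action, you can check the current value and switch it for the running session (micro here is just an example choice, not a recommendation):

```shell
echo "${EDITOR:-<not set>}"   # show the current default, if any
export EDITOR=micro           # takes effect for this session only
echo "$EDITOR"                # programs launched from here now use micro
```

To make the change permanent, put the export line in your shell’s runtime configuration file instead.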
As soon as you start doing serious stuff in CLI, you will encounter the limitation of being able to keep only one application open at any given time. When coding, you may want to edit the code, execute it, fix the mistakes, and execute again. When looking for a bug, you may want to list logs and see what gets logged when you send a request to the server. Typically, this would either mean switching between the two applications constantly or opening several terminal windows.
This is where a terminal multiplexer can help you. When speaking of multiplexers, some people immediately assume the topic is GNU Screen. It was the first widespread tool of its kind and is still very popular today (often being installed by default). Its modern replacement is tmux which, unsurprisingly, stands for “terminal multiplexer.”
These two allow you to have more than one window open in a given terminal session and to switch between those windows freely. They allow you to split windows into panes, which lets you run several applications at the same time and observe their output in real time (without switching windows). Also, they work in a client-server mode, which means you can detach at any given time and come back later to continue the work just where you left off. This last feature led to Screen’s popularity when people wanted persistent IRC sessions.
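A sketch of that detach-and-return cycle in tmux, using a hypothetical session named demo (the attach step is commented out because it takes over your terminal):

```shell
# Start a session in the background; it keeps running with no client attached.
tmux new-session -d -s demo 'sleep 30'
tmux list-sessions               # confirm it is alive on the server
# ...close your terminal, log back in later, then resume where you left off:
# tmux attach -t demo            # (press Ctrl+b d inside to detach again)
tmux kill-session -t demo        # just cleaning up after this demo
```

The same flow works over ssh, which is what makes long-running remote jobs survive a dropped connection.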
For most use cases, GNU Screen or tmux should be great for you, but if for some reason you consider them too heavy on resources, there are lighter alternatives: dtach, atach, and abduco. They are intentionally limited in scope but perform their respective duties well.
At this point, you may start thinking about getting all the aforementioned software installed on your machine. One problem is that each of the tools has different installation instructions. Sometimes, you need to download sources and compile them yourself, sometimes you get the self-contained binary, and sometimes you get what is called a binary package, which usually means an executable compressed together with some metadata.
To ease the process of installing software, operating system creators came up with the concept of package managers. Put simply, a package manager is like an app store for CLI and desktop apps—one that precedes actual app stores by some decades. The problem is that almost every system has its own package manager. Debian, Ubuntu, and derived GNU/Linux distributions use APT, Red Hat-based distributions prefer yum or DNF, other Linux distros have more exotic means of installing software, as do the different BSDs. Besides built-in package managers, there are also user-installed ones like Chocolatey for MS Windows and Homebrew for Mac OS X/macOS. When you want to write instructions on how to install your program, you may end up writing cases for each of those systems. Seems like a bit too much, doesn’t it?
Fortunately, the last of the mentioned systems, Homebrew, may be the most portable one, thanks to Linuxbrew, a port of Homebrew to GNU/Linux systems. The funny thing is, it even works on WSL if you want to have a similar user experience on Microsoft Windows. Keep in mind that WSL is not officially supported, though.
So, besides portability, what else can Homebrew offer? First of all, it does not interfere with the system packages, so everything you install resides in a separate layer from the operating system. Besides, installing packages usually requires no root permissions. You can, therefore, keep system packages that are stable and tested while trying out newer versions of them without sacrificing the stability of the system.
If you want to test the editors I mentioned earlier, all you need to do on a system with either Homebrew or Linuxbrew is run this command:
brew install emacs micro nano vim neovim
The Shiny Stuff
What we have already discussed is undoubtedly useful for work. But there are also applications that, while not necessary, still bring comfort to everyday life. You may not need them, but it is always worth knowing about them.
Searching the command history can be tedious. While both bash and zsh feature the Ctrl+R keybinding, it only shows one match at a time. What is more, you need to enter the exact text that you used before. Since this is quite a common operation once you start using the command line, it looks like a fine place for improvement.
Interactive filters like fzy, percol, peco, or fzf help you filter long lists of text. The input can be the aforementioned command history, all the lines of code in a project directory, or a list of filenames generated by find .. The general idea is to present you first with all the lines available and then rely on fuzzy-matching algorithms to filter out everything that doesn’t match.
For example, binding Ctrl+R to fzf shows you a list of the most recent commands, which you can navigate up and down using the arrow keys, or you can type git to show only the commands that feature Git somewhere inside. Personally, when I work with a shell that does not have an interactive filter, I suddenly feel a little bit lost. This feature is really compelling!
Plus, you can make your interactive filter available inside your programmable text editor. This way, you will have unified searching capabilities between your shell and your editor.
Facebook PathPicker was a great help when I was working mostly with C++ projects. The error log generated by the compiler can get pretty big and pretty nasty, and the ability to find the actual paths inside that log was a productivity boon.
In any given text file, or the content of your screen when used with tmux, fpp filters everything but the file paths. It then presents a UI where you can select one or more of those paths and run a command with them. The most common response would be to open the files in an editor, of course, which is the default action.
Chances are at least one of the projects you work on uses Git as its version control system. While entirely powerful, the Git CLI is not the pinnacle of excellent user experience. To save you some stress reading through all the options in git help $SUBCOMMAND, I recommend that you check out tig. It offers a nice console UI for the operations that benefit from it, such as browsing the commit history.
Another tool that aims to help Git users is fac, which is an acronym for Fix All Conflicts. As you might have guessed, it comes in handy when you run into conflicts while doing merges or rebases. It’s an alternative to other merge tools like vimdiff.
There was a time in the 90s when everybody wanted a two-pane file manager. The trend started with Norton Commander. Many others followed the same path, but the one that still sees a stable user base is Midnight Commander. The most obvious use case is using mc to manipulate local files, but it’s also very useful when working with remote machines.
Like most command-line programs, it’s very lightweight, so there is no problem running it over ssh, and thanks to its support for the FTP and FISH protocols, you can have the local file system visible in one pane and a remote one in the other—a convenient feature when you want to avoid typing or copying file names as arguments to scp.
“All work and no play makes Jack a dull boy,” they say. There are a lot of programs, command line and otherwise, that serve only your amusement. The Rogue video game falls into this category—it even gave its name to a whole genre of games! Other popular toys are fortune and cowsay, which can make your day a bit less dull if you use them somewhere in your CI scripts, for example.
But for some of us, the main appeal of using a console in the first place is to feel like a hacker in the movies. No More Secrets and Hollywood represent this group well. Try them when somebody’s watching you work, and your hacker cred is certain to rise!
Command Line in Practice
So, what is so appealing about the command line that offsets the hours spent learning how to use the shell, the editor, and all the switches of various apps? The short answer is productivity, which comes from two things:
One is that when you are presented with only a terminal window and nothing more, you can focus more intensely, as there is not much to distract you. No notifications popping up, no ads, no pictures of pretty kittens. Just you and your goal.
The second thing is automation. You can put several frequently combined actions in a script and call it later as a whole instead of typing them all by hand each time. You can quickly get back to a particularly complex command you once wrote by searching through your shell’s history. Basically, you can record and replay anything, and the code is available as a documentation of what you did.
The ability to add aliases also contributes to the gains. For example, I find myself often crafting commits in Git by updating the same one until it’s perfect (for the moment). Once I stage the desired files, I run git carmh. Don’t try to look it up in the manual, as it is my private alias meaning commit --amend --reuse-message=HEAD. It saves some typing for sure.
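Defining such an alias is a one-liner. Here it is recreated inside a throwaway repository so nothing global is touched:

```shell
# Recreate the "carmh" alias locally in a temporary repository.
repo=$(mktemp -d)
cd "$repo"
git init -q
git config alias.carmh 'commit --amend --reuse-message=HEAD'
git config alias.carmh   # prints the command the alias expands to
```

In everyday use, you would pass --global instead, so the alias is available in every repository.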
Thing is, people get bored repeating the same actions over and over, and boredom reduces focus. This can lead to mistakes and errors. The only way to avoid them is not to interlace high-focus and low-focus actions. Writing code is high-focus and reviewing a commit message and contents is high-focus, but when you need to repeat several mechanical clicks here and there to get to the stage of commit review, chances are your focus is lowered. The command line isn’t, of course, free of such mechanical activities, but thanks to automation, you can avoid most of them.
You may already have been aware of some or all command line tools mentioned in this article. You may have learned something new and useful while reading it. If so, excellent—my aim here was not to offer a comprehensive overview and comparison of different tools, but to demonstrate a few crucial tools that I have found helpful in my daily work, in the hopes that you might find some of them useful, too.
There are far more interesting command line programs out there, and if you are interested in them, I recommend checking the Awesome Shell curated list of some of the best command line tools available today.
Most of the GUI apps have their terminal counterpart. That includes web browsers, email clients, chat clients (IRC, Slack, XMPP), PIM suites, or spreadsheets. If you know of any good programs that I haven’t mentioned, please bring them up in comments.
Understanding the Basics
What is a command line argument?
Command line arguments are used to pass information to programs. For example, “cat /tmp/file” means “run the command ‘cat’ with ‘/tmp/file’ as an argument,” which instructs cat to print the contents of the passed file.
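Inside a shell script, the arguments arrive as positional parameters. A minimal sketch, using a throwaway script written to a temporary path:

```shell
# Write a tiny script to a temporary file, then call it with one argument.
demo=$(mktemp)
cat > "$demo" <<'EOF'
#!/bin/sh
echo "got $# argument(s); first is: $1"
EOF
sh "$demo" /tmp/file   # prints: got 1 argument(s); first is: /tmp/file
```

Here $1 is the first argument and $# is the number of arguments passed.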
What is the Command Line Interface?
The Command Line Interface (CLI) is a means of interacting with programs lacking a GUI. It can be interactive, but it can also handle just the arguments and return responses.
What is the difference between CUI and GUI?
A graphical user interface (GUI) uses elements such as windows and icons to interact with an application (like a web browser). A character user interface (CUI) mimics this interaction in a terminal, using only printable characters.
How to use the command line (on different systems)
To use the command line, you need to open a terminal emulator, which usually has a name containing the word “Terminal” or “cmd.” For remote systems, this would be PuTTY or ssh.