How to Recover a List of Files with Git

This is a piece of information I am posting for my own convenience, but it may be useful to other people who use Git.

I use Git to keep track of my local files: not only source code, but all kinds of information, including blog entries and other miscellaneous data.

Sometimes I delete or move a whole directory, and wish to recover it later.

The recipe is simple, but easy to forget because I rarely use it:

git ls-files -d | xargs git checkout --

To explain this: git ls-files -d lists the files that have been deleted from the working tree. xargs then feeds those names to git checkout --, which restores each file from the index.
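For example, here is a throwaway session (the repository and file names are made up) that deletes a tracked file and then recovers it with the recipe above:

```shell
# set up a throwaway repository with one tracked, committed file
mkdir demo-repo && cd demo-repo
git init -q
echo "draft post" > entry.txt
git add entry.txt
git -c user.name=demo -c user.email=demo@example.com commit -qm "add entry"

# delete the file, then recover it
rm entry.txt
git ls-files -d | xargs git checkout --

cat entry.txt   # prints "draft post" again
```

Note that this only covers deletions that have not been staged: once a deletion is staged with git rm, git ls-files -d no longer reports it, because the file is gone from the index too.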

Accessing MySQL from SBCL

This weekend I spent some time testing CLSQL on SBCL. I had tried this before on Windows, but I couldn't get it to work, so I used my trusty Amazon EC2 Linux machine to do the job.

First, I used yum to install SBCL. It is as simple as running

yum install sbcl

and watching the packages being downloaded. I also needed to install gcc and mysql-devel (the exact package names depend on the distribution):

yum install gcc
yum install mysql-devel

Then I used asdf-install to fetch the required packages, including CLSQL. These are some of the commands I used:

(require :asdf)
(require :asdf-install)
(asdf-install:install :uffi)
(asdf-install:install :clsql)

There was a lot of complaining from asdf-install, especially related to a missing gpg library. Also, most backends other than MySQL failed to compile, which is fine; I just ignored those failures by selecting one of the restart options offered by the debugger.

Once everything is installed, you can use CLSQL to make calls to MySQL. I created a simple database called carlosdb with the following commands (typed at the mysql prompt):

create database carlosdb;
use carlosdb;
create table users (
     `id` INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
     `name` VARCHAR(100) NOT NULL);

Then, we can use clsql in the following way:

(require :asdf)
(asdf:operate 'asdf:load-op 'clsql)
(use-package :clsql-user)
(connect '("" "carlosdb" "" "") :database-type :mysql)
(execute-command "use carlosdb")
(query "select * from users")

The last operation returns two results, which are lists like the following:

((1 "john nash") (2 "james jim") (3 "jon williams"))
("id" "name")

where the first list is composed of sublists, each holding the contents of one record. The second list stores the names of the fields (id and name, in this case).



Making the Compiler Work Harder for You

In the last post, about learning to use the debugger, I mentioned a few tricks you can perform with a good debugger, such as changing the values at certain memory locations. These tricks can speed up your debugging process, because you don't need to recompile lots of code and restart your program just to test a small change.

Another related technique consists of using the compiler itself to perform some checks while you make changes to the code.

An example of how this works is presented in a post by Hovik Melikyan. He discusses what he calls brute force programming, where the idea is to use the compiler to perform temporary checking.

For example, one of the techniques is to use the compiler to figure out exactly where a variable is used. The simple thing to do is to temporarily rename the variable's definition. During compilation, the compiler will flag every location where the old name is still used, and you can fix them accordingly.
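As a small sketch of the renaming trick (assuming g++ is available; any C++ compiler behaves the same way), we can rename a variable's definition and let the failed compilation enumerate every use site:

```shell
# a tiny program where 'count' is defined once and used twice
cat > rename-demo.cpp <<'EOF'
#include <iostream>

int main() {
    int count_tmp = 0;     // temporarily renamed from 'count'
    count = count + 1;     // the compiler will flag this line...
    std::cout << count;    // ...and this one
    return 0;
}
EOF

# the error list is exactly the set of places still using the old name
g++ -fsyntax-only rename-demo.cpp || echo "all uses of 'count' reported above"
```

Once you have edited every flagged line, renaming the definition back (or keeping the new name everywhere) gives you a clean build again.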

A related technique I use in C++ shows where a type was defined. If you want to find the definition of a type (usually one from a library, such as MFC), just write a new typedef for the type you are searching for. If the type is, say, Widget, you would write

typedef int Widget;

If you do this, the compiler will immediately give you an error saying something like: "type Widget was previously defined in file X, line Y." So now you know where the type was originally defined.

Is there any other trick that you use to make your compiler work for you? Let me know.