jeblad

ZFS and backup

John Erling Blad

04/17/21 04/17/21 04/24/21

This post is a response to discussions about how I prefer to set up storage space on my development computers, and whether that could be of interest to other users. I prefer to be able to roll back and redo my work in case of problems, a preference somewhat colored by my own experiences. Other users may view things differently.
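As a taste of why ZFS fits a rollback-and-redo workflow, the core loop is only a couple of commands. The pool and dataset names here (`tank/dev`) are hypothetical; adjust them to your own layout.

```shell
# Take a cheap point-in-time snapshot before risky work.
zfs snapshot tank/dev@before-refactor

# ... hack away ...

# Discard everything written since the snapshot.
zfs rollback tank/dev@before-refactor

# Inspect available snapshots for the dataset.
zfs list -t snapshot tank/dev

# A snapshot can also be cloned, giving a writable branch-like
# copy so the "redo" can happen without touching the original.
zfs clone tank/dev@before-refactor tank/dev-redo
```

Snapshots are copy-on-write, so taking one is nearly free; the cost only appears as old and new data diverge.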

Training with autoencoders

John Erling Blad

04/18/20 04/19/20 04/25/20

During a hackathon I got a problem where we had very little labeled data (almost nothing at all) and no easy way to get more. I proposed a solution with an autoencoder. It is pretty straightforward, but later I started wondering why so few use this solution instead of trying to wrangle more labeled data (which is usually quite expensive). I believe it partly has to do with the idea that real data does something magical with the input data, and partly that most users overfocus on what they want to get out of the network.
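The idea can be sketched in a few lines of plain numpy: pretrain an autoencoder on the large unlabeled pool, then spend the few labels on a tiny classifier over the latent codes. Everything below (shapes, learning rates, the synthetic data and labels) is illustrative, not the actual hackathon code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the hackathon situation: plenty of unlabeled
# samples, almost no labeled ones.
X_unlabeled = rng.normal(size=(500, 20))
X_labeled = rng.normal(size=(20, 20))
y_labeled = (X_labeled[:, 0] > 0).astype(float)  # hypothetical labels

d, h = 20, 8                                 # input and latent dimensions
W_enc = rng.normal(scale=0.1, size=(d, h))   # encoder weights
W_dec = rng.normal(scale=0.1, size=(h, d))   # decoder weights

def reconstruction_loss(X):
    Z = np.tanh(X @ W_enc)
    return float(np.mean((Z @ W_dec - X) ** 2))

loss_before = reconstruction_loss(X_unlabeled)

# Pretrain the autoencoder on the *unlabeled* data with plain
# gradient descent on the mean squared reconstruction error.
lr = 0.02
for _ in range(500):
    Z = np.tanh(X_unlabeled @ W_enc)      # latent code
    err = Z @ W_dec - X_unlabeled         # reconstruction error
    W_dec -= lr * Z.T @ err / len(X_unlabeled)
    W_enc -= lr * X_unlabeled.T @ ((err @ W_dec.T) * (1 - Z**2)) / len(X_unlabeled)

loss_after = reconstruction_loss(X_unlabeled)

# The few labels are then used for a small logistic-regression
# classifier over the learned latent codes.
Z_lab = np.tanh(X_labeled @ W_enc)
w = np.zeros(h)
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(Z_lab @ w)))
    w -= 0.1 * Z_lab.T @ (p - y_labeled) / len(y_labeled)
```

The point is the division of labor: the expensive-to-label signal only has to train the small classifier `w`, while the representation itself is learned from data that is essentially free.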

Strong EPSP

John Erling Blad

01/25/20 02/18/20

Some time ago I read a story about how an excitatory synapse could fire a much stronger excitatory postsynaptic potential (EPSP) if the local dendrite was already sufficiently excited. The response in this case was on the order of 200 times stronger. This radically changes the behavior of the neuron: it no longer has a unimodal response, it has a multimodal one.
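The shift from unimodal to multimodal can be illustrated with a deliberately crude toy model, where a fixed threshold on local dendritic depolarization gates the amplification. The threshold, baseline amplitude, and units below are all made up for illustration; only the 200x gain comes from the story.

```python
import numpy as np

GAIN = 200.0       # amplification from the story
THRESHOLD = 1.0    # hypothetical local depolarization threshold (arbitrary units)
BASE_EPSP = 0.05   # hypothetical baseline EPSP amplitude (arbitrary units)

def epsp_response(local_potential):
    """Toy gating: a sufficiently excited dendrite amplifies the EPSP."""
    if local_potential >= THRESHOLD:
        return GAIN * BASE_EPSP
    return BASE_EPSP

rng = np.random.default_rng(1)
local = rng.uniform(0.0, 2.0, size=1000)   # random local dendritic states
responses = np.array([epsp_response(v) for v in local])

# The response amplitudes now fall into two widely separated clusters
# instead of one unimodal spread.
print(sorted({float(r) for r in responses}))  # -> [0.05, 10.0]
```

Real dendritic nonlinearities are of course graded and history-dependent, not a hard threshold, but even this caricature shows why a single-valued "synaptic weight" stops being a good description.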

Git as backup

John Erling Blad

12/04/19 12/04/19

Some years ago I got a really weird phone call. I had worked on a project several years earlier, a project where code was not to be taken outside the premises, and they had lost their code. The question was rather frank: did I take a copy outside the premises, because (ehm) they had lost the code. Of course I did not have a copy, or rather, I did not know about a copy at that time. How can you create a secure backup in a git development environment without too much hassle, and is it possible at all?

AI vs ML vs robotics

John Erling Blad

12/01/19 12/03/19

The same claim about artificial intelligence (AI) seems to emerge time and again: we have this system, it has some kind of smartness built in, and surely it must be an AI system. This misconception seems to stem partly from the idea that every machine with smartness must be an AI system, and partly from sales departments wanting to boost the idea that some product is especially smart, so they call it an AI system. That may not be far from the most common explanation of “AI”, but it is actually quite far from real “AI”.