So this is where I start getting chills about the language and the engineering behind it. Performance of your Go program is going to come from three places. We can say that it can come from algorithm efficiencies, right? If we can do something in 10 instructions instead of 20, it’s going to be faster. But the hardware is so fast today that you can have levels of inefficiency in your algorithm and still get enough performance.

Two – garbage collection. We’ve got a low-latency garbage collector, and we’re talking about it running under a hundred microseconds… The reality is that if we’re allocating values to the heap that really shouldn’t be there, there’s going to be a performance cost to that. Again, we always want to talk about performance as, “Is it fast enough?”

Third – Damian did an amazing talk two years ago at dotGo – it’s about how efficiently we can get data into the processor today, and the mechanical sympathies about that.

And if we break these three things down, Go is really pushing us in the right direction on all three, if we take some time to just look at what the language is providing, and then again start looking at some basic idioms and patterns.

[00:36:11.16] So if we talk about what Damian is saying today, that performance today comes from how efficiently we can get data into the processor; he’s talking about a caching system. If you’ve noticed, Go basically has three data structures: arrays, slices (which is an array underneath; it’s a vector) and maps. And the key to these three data structures – and maps are doing this underneath – is that they lay data out in a contiguous block of memory, and contiguous memory creates the predictable access patterns that keep the hardware caches full.

So the funny thing is, as a Go developer coming in, the compiler, the language, and the runtime are taking care of all of this for you, without the need of a big, heavy virtual machine. Just by using a slice to store your data, you’re already doing the best that you can.

When it comes to, again, the garbage collector, if we follow some basic idioms around using value semantics for our reference types, then the majority of the values you’re creating will stay on your stack. They will not allocate. You didn’t even have to necessarily understand that all of this is happening, and this to me is where I get super-excited, because you can come to this language, follow some basic principles and design and idioms, and you’re doing all of these things correctly when the machine is your model. And over time, as you learn more, and more, and more, you start to understand why you’re doing this, and that’s what I try to do in my class – get you to appreciate why Go has slices, and arrays, and maps, so you want to use them for yourself and you understand the brilliance behind that.
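To make the stack-versus-heap point concrete, here is a small sketch of my own (the `user` type and constructor names are made up for illustration). Returning a copy lets the value live on the stack; returning a pointer to a local value forces escape analysis to move it to the heap:

```go
package main

import "fmt"

type user struct {
	name string
	age  int
}

// createUserValue uses value semantics: the caller receives a copy,
// so the value can live entirely on stack frames. No heap allocation,
// nothing for the garbage collector to track.
//
//go:noinline
func createUserValue() user {
	return user{name: "bill", age: 45}
}

// createUserPointer shares a value up the call stack. The local value
// outlives the function's frame, so escape analysis sends it to the
// heap, and now the GC has to manage it.
//
//go:noinline
func createUserPointer() *user {
	return &user{name: "bill", age: 45}
}

func main() {
	u1 := createUserValue()
	u2 := createUserPointer()
	fmt.Println(u1.name, u2.name)
}
```

Building with `go build -gcflags="-m"` shows the compiler’s escape analysis decisions; for the pointer version it reports that the `&user{...}` literal escapes to the heap. (The `//go:noinline` directives just keep the demonstration honest, since inlining can otherwise let the compiler stack-allocate the pointer case too.)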

We teach escape analysis only so you can appreciate the value and pointer semantics that the language gives you and effectively use them, and understand that the machine is so fast today that you can have some levels of inefficiency, which to me means this – I don’t have to write clever code in terms of making that algorithm so efficient. If it’s going to be a little less efficient but more readable, it’s probably not going to be your bottleneck in terms of performance.

So let’s focus on integrity, readability, and simplicity first. Let’s do those readability code reviews first, and then what’s brilliant about Go is that you don’t have to guess about performance. We’ve got a tremendous amount of tooling that will tell you what isn’t performing well and give you a really clear understanding of where and how to fix it. So we don’t have to worry about the performance impact of every line of code at the time you’re writing it; if you focus on readability, you’re going to get the bulk of the performance anyway.
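The tooling being referred to here includes Go’s built-in benchmarking and profiling support. As a minimal sketch (the `sum` function is my own example), this is what measuring instead of guessing looks like; normally the benchmark would live in a `_test.go` file, but `testing.Benchmark` lets it run as a standalone program:

```go
package main

import (
	"fmt"
	"testing"
)

// sum is a stand-in for whatever code you suspect is slow.
func sum(xs []int) int {
	total := 0
	for _, x := range xs {
		total += x
	}
	return total
}

func main() {
	xs := make([]int, 1024)
	for i := range xs {
		xs[i] = i
	}

	// In a real project this would be:
	//   func BenchmarkSum(b *testing.B) { ... }
	// run with `go test -bench=. -benchmem`, and profiled with
	//   go test -bench=. -cpuprofile=cpu.out
	//   go tool pprof cpu.out
	res := testing.Benchmark(func(b *testing.B) {
		for i := 0; i < b.N; i++ {
			sum(xs)
		}
	})
	fmt.Println(res)                            // ns/op varies by machine
	fmt.Println(res.AllocsPerOp(), "allocs/op") // sum allocates nothing
}
```

The benchmark result reports both time per operation and allocations per operation, so the tooling tells you directly whether a value escaped to the heap – no guesswork required.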
