These AI-written articles carry all the features and appearance of a well-reasoned, logical article. But if you actually pause to think through what they're saying, the conclusions make no sense.
In this case, no, it's not that Go can't add a "try" keyword because its errors are unstructured and contain arbitrary strings. That's how Python works already. Go hasn't added try because they want to force errors to be handled explicitly and locally.
It is simpler than that. Go hasn't added "try" because, much like generics for a long time, nobody has figured out how to do it sensibly yet. Every proposal, of which there have been many, has had gaping holes. Some of the proposals got as far as being implemented in a trying-it-out capacity, but even those fell apart under scrutiny once people started using them in the real world.
Once someone figures it out, it will come. The Go team has expressed wanting it.
The `try` syntax mentioned in the article doesn't actually make things less explicit in terms of error handling. Zig has `try` and the error handling is still very much explicit. Rust has `?`, same story.
I just read the article and I didn't come away with that rationale. Now, this isn't to say that I agree with the author. I don't see why Go would *have* to add typed error sets to have a try keyword.
Yes, mimicking Zig's error handling mechanics in Go is very much impossible at this point, but I don't see why we can't have a flavor of said mechanics.
I think the argument is that the compiler does not enforce that the error must be checked. It's just a convention. Because you know Go, you know it's convention for the second return value to be an error. But if you don't know Go, it's just an underscore.
In a language like Rust, if the return type is `Result<MyDataType, MyErrorType>`, the caller cannot access the `MyDataType` without using some code that acknowledges there might be an error (match, if let, unwrap etc.). It literally won't compile.
One big difference is that with unwrap in Rust, if there is an error, your program will panic. Whereas in Go if you use the data without checking the err, your program will miss the error and will use garbage data. Fail fast vs fail silently.
But I'm just explaining the argument as I understand it to the commenter who asked. I'm not saying it is right. They have tradeoffs and perhaps you prefer Go's tradeoffs.
> if the return type is `Result<MyDataType, MyErrorType>`, the caller cannot access the `MyDataType` without using some code that acknowledges there might be an error (match, if let, unwrap etc.)
I think you can make the same argument here - rust provides unwrap and if you don’t know go, that’s just how you get the value out of the Result Type.
Go has tools for checking things like this; it's just not in the compiler. If you don't want to enforce that all errors are checked, Go doesn't force you to. If you do, it requires you to run an extra tool in your build process.
(Or in your commit hook. If you want to develop without worrying about such things, and then clean it up before checkin, that's a development approach that go is perfectly fine with.)
Criticisms of Go often seem to pivot on the author's understanding of what's being done with `_` and sometimes `nil`. It's a strongly typed language with a lot of flexibility around types, and that's nice to have when working on edge systems like a data ingester.
My takeaway is that Go almost always prefers simplicity over rigorous software engineering. `nil` without compiler checks is another example, as is designing a new language without generics. However, the overall simplicity has its own value.
I agree, its strength (beyond goroutines) is that anyone who knows one of the popular languages (Python, Java, etc) can easily translate their idioms and data structures to Go, and the code would remain easy to read even without much Go experience. That's probably one reason why the TypeScript compiler team chose Go.
But this makes the language feel like Python, in some ways. Besides nil, the lack of expressivity in its expressions makes it more idiomatic to write things imperatively with for loops and appending to slices instead of mapping over the slice. Its structurally typed interfaces feel more like an explicit form of duck typing.
From what I remember of a presentation they gave on how and why they made Go, this is no coincidence. They had a lot of Python glue code at Google, but had issues running it in production due to mismatched library dependencies, typing bugs, etc. So they made Go easy to adapt that Python code to (and especially easy for the people writing that code to switch to), while addressing the specific production issues they faced.
> the popular languages (Python, Java, etc) can easily translate their idioms and data structures to Go, and the code would remain easy to read even without much Go experience

Disagree; they made many decisions that differ from the mainstream: OOP and syntax, for example.
Sure, the syntax is unique, but it's fairly easy to get over that. I guess I'm comparing to Rust, where not only is the syntax different, but data structures like a tree with parent references aren't as straightforward (nor idiomatic), and there are a lot more explicit methods that require knowing which are important and which are just noise (e.g. unwrap, as_ref).
I would argue that after a short tutorial on basic syntax, it's easier for a Python/JavaScript programmer to understand Go code than Rust.
To me, Rust syntax is less alienating: they adapted ML syntax, which is probably the second most popular (Scala, TypeScript, Kotlin) after C-style syntax, while Go for whatever reason got something totally new.
For some uses, that's all you need, and having more features often detracts from the experience. But I'm doubtful about how often exactly; from every stack that I've tried, I have been able to carve out a simple sub-language that is easier to use than Go.
I think of it as a bit like Python with stronger types.
I'm not convinced that you couldn't have good software engineering support and simplicity in the same language. But for a variety of mostly non-technical reasons, no mainstream language provides that combination, forcing developers to make the tradeoff that they perceive as suiting them best.
The Go team has discussed syntactic sugar for error handling many times. They're not against it, they're just holding out for a proposal that checks a lot of boxes and makes everyone happy, which hasn't happened yet.
If they wanted error handling, they would have thought for a bit and then picked a good-enough solution. "We only want 'X but perfect'" is the same as "we don't want X".
The author doesn't touch on it, but the bigger problem with things like Foo|Bar as an actual type (rather than as a type constraint) is that every type must have a default/zero value in Go. This has proven to be a difficult problem for all of the proposals around adding sum types, whether they're used as errors or otherwise. For example, to support interface{Foo|Bar} as a reified type, you'd have to tolerate the nil case, which means you either couldn't call any methods even if they're common to Foo and Bar, or else the compiler would have to synthesize some implementation for the nil case, which would probably have to just panic anyway. And an exhaustive type-switch would always have to have "case nil:" (or "default:") and so on.
Hot take, maybe, but this is one of the few "mistakes" I see with Go. It makes adding QoL things like you mentioned difficult, requires shoehorning in pointers to allow for an unset condition, leaves some types (like maps) without a safe default/zero value, and makes comparisons (especially generic ones) overly complex.
Go specifically does not want to add QoL things because it means the compiler team has to spend time implementing that extra syntax and semantics versus making a minimal set of features better.
The problem with the zero value business is that it also makes adding these QoL things in libraries difficult or outright impossible. Case in point, I tried building a library for refinement types, so you can have a newtype like,
and that enforces an invariant through the type system. In this case, any instance of type AccountName needs to hold a string conforming to a certain regular expression. (Another classical example would be "type DiceRoll int" that is restricted to values 1..6.)
But then you run into the problem with the zero value, where the language allows you to say
var name AccountName // initialized to zero value, i.e. empty string
and now you have an illegal instance floating around (assuming for the sake of argument that the empty string is not a legal account name). You can only really guard against that at runtime, by panic()ing on access to a zero-valued AccountName. Arguably, this could be guarded against with test coverage, but the more insidious variant is
type AccountInfo struct {
ID int64 `json:"id"`
Name AccountName `json:"name"`
}
When you json.Unmarshal() into that and the payload does not contain any mention of the "name" field, then AccountName is zero-valued and nothing has any chance of noticing. The only even somewhat feasible solution that I could see was to have a library function that goes over freshly unmarshaled payloads and looks for any zero-valued instances of any refined.Scalar type. But that gets ugly real quick [1], and once again, it requires the developer to remember to do this.

[1] https://github.com/majewsky/gg/blob/refinement-types-4/refin...
So yeah, I do agree that zero values are one of the language's biggest mistakes. But I also agree that this is easier to see with 20 years of hindsight and progress in what is considered mainstream for programming languages. Go was very much trying to be a "better C", and by that metric, consistent zero-valued initialization is better than having fresh variables be uninitialized.
Go was trying to be a better C++. In C++ there are infinitely many different constructors, and that was too complicated, so they made a language with only one constructor. Go isn't the way it is because nobody knew any better; it's because they deliberately chose to avoid adding things that they thought weren't beneficial enough to justify their complexity.
You're missing the point: Go does not want these QoL features. Arguing about why they are hard to add is pointless because, philosophically, they are undesirable and not going to be accepted.
In Zig you need an allocator to allocate anything, so whenever you need to add some extra information to an error, you pass a diagnostics object as an output argument to a potentially failing function. In this case it becomes a bit harder to compare it to Go's errors, each with pros and cons. I think comparing Go errors to Rust errors would be more fair.
There are some articles about the diagnostic pattern in Zig, e.g. [1], [2]

[1] https://github.com/ziglang/zig/issues/2647#issuecomment-5898...
[2] https://mikemikeb.com/blog/zig_error_payloads/
How so? In Rust you also need an allocator to allocate anything. Zig's diagnostics idiom is just that, an idiom. It would be very weird to do this in Rust, but then it's a pretty weird choice in Zig, they've just decided to do it anyway.
There's not much broken with the error type itself, but the "real" problem is that the Go team decided not to change the way errors are handled, so it becomes a question of error handling ergonomics.
The article doesn't have a clear focus, unfortunately, and I think it was written by an LLM. So I think it's more useful to read about the struggles in the Go team's own post: https://go.dev/blog/error-syntax
Go got a ton right, especially for being almost 20 years old. But errors are one thing that needs a v2. I love Zig's enumerable errors and `errdefer`.
I keep feeling this feeling and it depresses me. I start reading an article and then gradually realise it's a load of AI slop, but by the time I get that realisation I've already wasted several minutes of my life. It's a sinking feeling like I've been duped, but not for anybody's gain - the "author" isn't earning anything from my view, they've just wasted my time for no reason. Even my misfortune is valueless. It happens again and again and again, it's wearing me down.
It's more complicated. There is no single correct way to check for errors. Some standard library functions can return data and an error: https://pkg.go.dev/io#Reader
This is true, but it feels like a mistake. It's too late to change now, of course, but I feel like (0, nil) and (!=0, !=nil) should both have been forbidden. The former is "discouraged" now, at least. It does simplify implementations to allow these cases, but it complicates consumers of the interface, and there are far more of the latter than the former.
In my experience, writing a few lines to handle errors is really not as big of a deal as a lot of people make it out to be. However, I've seen numerous times how error handling can become burdensome in poorly structured codebases that make failure states hard to manage.
Many developers, especially those in a rush, or juniors, or those coming from exception-based languages, tend to want to bubble errors up the call stack without much thought. But I think that's rarely the best approach. Errors should be handled deliberately, and those handlers should be tested. When a function has many ways in which it can fail, I take it as a sign to rethink the design. In almost every case, it's possible to simplify the logic to reduce potential failure modes, minimizing the burden of writing and testing error handling code and thus making the program more robust.
To summarize, in my experience, well-written code handles errors thoughtfully in a few distinct places. Explicit error handling does not have to be a burden. Special language features are not strictly necessary. But of course, it takes a lot of experience to know how to structure code in a way that makes error handling easy.
Sure … it is true that Go errors can carry data, and Zig ones perhaps do not, but I don't see how that is what disqualifies a `try` from being possible. Rust's errors are rich, and Rust had `try!` (which is now just `?`).
The article's reasoning around rich errors seems equally muddled.
> In Zig, there's no equivalent. If both calls fail with error.FileNotFound, the error value alone can't tell you which file was missing.
Which is why I'm not a huge fan of Zig's error type … an int cannot convey the necessary context! (Somehow I'd have thought C had so thoroughly demonstrated integer error codes to be a bad idea, e.g. `mkdir` telling people "No such file or directory." (yeah, no such directory, that's why I'm calling `mkdir`), that no new design would limit itself to them.)
But then we go for …
> Zig's answer is the Error Return Trace: instead of enriching the error value, the compiler tracks the error's path automatically.
But the error's "path" cannot tell you what file was missing, either…
> It tells you where the error traveled, not what it means. Rather than enriching the error value, Zig enriches the tooling.
Sure … like, again, a true-ish statement (or opinion), but one that just doesn't contribute to the point, I guess? A backtrace is also useful, but having the exact filename is useful, too. It depends on the exact bug that I'm tracking: sometimes the backtrace is what I need, sometimes the name of the missing file is what I need. Having both would be handy, depending on circumstance, and the call stack alone does not tell you the name of the missing file.
… how does either prevent a `try` statement?
The article tries to argue that somehow the stdlib would need to change, but I do not see how that can be. It seems like Go could add try as syntactic sugar for the pattern at the top of the article. (And if the resulting types would have type-errored before the sugaring, they still could after it, etc.)
Everyone compares Go to Rust. This AI-generated slop mentions Rust at the top, then launches into an explanation of how Go is not like Zig, where Rust is also not like Zig, but instead is extremely like Go. This answers no questions at all about the argument people actually participate in.
Coming from Java/C# with exceptions, Go felt like an improvement.
Most languages eventually end up confusing try-catch, errors, exceptions, handle or re-throw... together with most programmers mixing internal errors, business errors, transient errors... creating complex error types with error factories, ifs and elses... Everything returning the same simple error is simply genius.
Also, a lot of Zig posts are tone-deaf like this: "Oh look, something so simple and we're the first to think about it. We must be really good."
"Here's the uncomfortable truth: a try keyword in Go without fixing the error type is just syntax sugar. You'd get slightly less typing but none of the real benefits - no exhaustiveness checking, no compiler-inferred error sets, no guarantee that you've actually handled every case."
... So what? From what I can tell that's all anyone has asked for in the context of something to just return nil/error up the call stack.
Exactly. I don't like that many people say, "It's not perfect, so it's useless." I don't want to write or read the `if err != nil` statement over and over again. It is messy. It is tiresome. It could be solved by syntactic sugar.
How utterly arrogant to insist that “every Go developer” wishes the language abandoned its principles in order to add some syntactic sugar to save a few lines of code. No, we don’t all feel a pang of envy at magic keywords that only work in certain function call configurations. Sheesh.
I really hate it when people try to justify Go's design decisions. Try would be very useful. The real reason is that the Go team refuses to take any lesson from any other programming language except C. That's why there are so many questionable decisions. Also, I was a little disappointed there was no mention of panics, which are one such questionable decision. Also, the author stopped trying to cover up their AI-written tracks at the conclusion, because you've already read it, so who cares.
> nobody has figured out how to do it sensibly yet.
In general or specifically in Go?
The programmer is explicitly throwing away the error returned by ReadFile (using the underscore) in the criticism of Go.
Saying that is not explicit is just wrong.
When you see .unwrap in Rust code, you know it smells bad. When you see x, _ := in Go code, you know it smells bad.
> But if you don't know Go, it's just an underscore.
And if you don't know Rust, .unwrap is just a getter method.
> When you see x, _ := in Go code, you know it smells bad.
What if it’s a function that returns the coordinates of a vector and you don’t care about the y coordinate?
Also, Go has generics now, finally.
It's a very improved 1960s language.
Example: https://go.dev/blog/error-syntax
It's a settled matter at this point for the foreseeable future
https://go.dev/blog/error-syntax
tl;dr - proposals are no longer being considered
What is broken about the Go error type? If anything, the fact that it is a simple interface makes the `try` syntax sugar more doable, right?
Let's say you have this:
```
part, err := doSomething()
if err != nil {
    return nil, err
}

data, err := somethingElse(part)
if err != nil {
    return nil, err
}

return data, nil
```
Then as long as your function followed the contract 0+ returns and then 1 `error` return, that could absolutely be turned into just the 0+ returns and auto-return error.
The fact that the `Error` interface is easy to match and extend, plus the common pattern of adding an error as the last return makes this possible.
What am I missing here?
What are we doing here?
It's more complicated. There is no single correct way to check for errors. Some standard library functions can return data and an error: https://pkg.go.dev/io#Reader
This is true, but it feels like a mistake. It's too late to change now, of course, but I feel like (0, nil) and (!=0, !=nil) should both have been forbidden. The former is "discouraged" now, at least. It does simplify implementations to allow these cases, but it complicates consumers of the interface, and there are far more of the latter than the former.
In my experience, writing a few lines to handle errors is really not as big of a deal as a lot of people make it out to be. However, I've seen numerous times how error handling can become burdensome in poorly structured codebases that make failure states hard to manage.
Many developers, especially those in a rush, or juniors, or those coming from exception-based languages, tend to want to bubble errors up the call stack without much thought. But I think that's rarely the best approach. Errors should be handled deliberately, and those handlers should be tested. When a function has many ways in which it can fail, I take it as a sign to rethink the design. In almost every case, it's possible to simplify the logic to reduce potential failure modes, minimizing the burden of writing and testing error handling code and thus making the program more robust.
To summarize, in my experience, well-written code handles errors thoughtfully in a few distinct places. Explicit error handling does not have to be a burden. Special language features are not strictly necessary. But of course, it takes a lot of experience to know how to structure code in a way that makes error handling easy.
The plot seems to get real lost in the article.
Sure … it is true that Go errors can carry data, and Zig ones perhaps do not, but I don't see how that is what disqualifies a `try` from being possible. Rust's errors are rich, and Rust had `try!` (which is now just `?`).
The article's reasoning around rich errors seems equally muddled.
> In Zig, there's no equivalent. If both calls fail with error.FileNotFound, the error value alone can't tell you which file was missing.
Which is why I'm not a huge fan of Zig's error type … an int cannot convey the necessary context! (You'd think C had so thoroughly demonstrated integer error codes as a bad idea, e.g. `mkdir` telling you "No such file or directory." — yeah, no such directory, that's why I'm calling `mkdir` — that all new designs would avoid being limited to them.)
But then we go for …
> Zig's answer is the Error Return Trace: instead of enriching the error value, the compiler tracks the error's path automatically.
But the error's "path" cannot tell you what file was missing, either…
> It tells you where the error traveled, not what it means. Rather than enriching the error value, Zig enriches the tooling.
Sure … like, again, a true-ish statement (or opinion), but one that just doesn't contribute to the point, I guess? A backtrace is also useful, but having the exact filename is useful, too. It depends on the exact bug that I'm tracking: sometimes the backtrace is what I need, sometimes the name of the missing file is what I need. Having both would be handy, depending on circumstance, and the call stack alone does not tell you the name of the missing file.
… how does either prevent a `try` statement?
The article tries to argue that somehow the stdlib would need to change, but I do not see how that can be. It seems like Go could add `try` as syntactic sugar for the pattern at the top of the article. (And if the resulting types would have type-errored before, they could after desugaring, etc.)
Is it because “do or do not, there is no try”?
Everyone compares Go to Rust. This AI-generated slop mentions Rust at the top, then launches into an explanation of how Go is not like Zig, where Rust is also not like Zig, but instead is extremely like Go. This answers no questions at all about the argument people actually participate in.
Why is the article written by AI?
Coming from Java/C# with exceptions, Go felt like an improvement.
Most languages eventually end up in a confusing mix of 'try-catch', errors, exceptions, handle?, re-throw?... together with most programmers mixing internal errors, business errors, transient ones... creating complex error types with error factories, ifs and elses... Everything returning the same simple error is simply genius.
Also, a lot of Zig posts are tone-deaf like this: "Oh look, something so simple and we're the first to think about it. We must be really good."
"Here's the uncomfortable truth: a try keyword in Go without fixing the error type is just syntax sugar. You'd get slightly less typing but none of the real benefits - no exhaustiveness checking, no compiler-inferred error sets, no guarantee that you've actually handled every case."
... So what? From what I can tell, "slightly less typing" is all anyone has asked for in this context: something to just return nil/error up the call stack.
Exactly. I don't like that many people say "it's not perfect, so it's useless". I don't want to write or read `if err != nil` over and over again. It is messy. It is tiresome. It could be solved by syntactic sugar.
llm garbage
agreed, the author should be upfront that it was written by an LLM
The llm detector in my brain went off too
How utterly arrogant to insist that “every Go developer” wishes the language abandoned its principles in order to add some syntactic sugar to save a few lines of code. No, we don’t all feel a pang of envy at magic keywords that only work in certain function call configurations. Sheesh.
I really hate it when people try to justify Go’s design decisions. Try would be very useful. The real reason is that the Go team refuses to take any lesson from any other programming language except C. That’s why there are so many questionable decisions. I was also a little disappointed there was no mention of panics, which are one such questionable decision. Also, the author stopped trying to cover up their AI-written tracks at the conclusion, because you’ve already read it, so who cares.
> the Go team refuses to take any lesson from any other programming language except for C.
Go acknowledges taking the design of the object file format from, IIRC, Modula-2. You are very wrong.