
Programming |OT| C is better than C++! No, C++ is better than C

They only told us about smart pointers in the last lecture or the one before; it would have saved us so much headache (even if we can't use the library versions, they're easy to implement yourself). But I guess memory management is part of the syllabus.

It's not really a library. Did you use cin and cout for I/O? Because that's the same "library"; it's part of the C++ language.
 

Somnid

Member
I don't know Rust, but I'm skeptical about this claim. Are you saying every memory access is guarded by a lock? Even if that's true (which is very unlikely), it's still possible to have a race. Detecting race conditions is equivalent to the halting problem, so how can it provably prevent them?

It's not done at runtime; it simply will not let you compile code in which such things are possible. It forces you to be very explicit about who can access memory: everything is immutable by default, and bindings can only be mutated by their single owner. Any ambiguity is a compile error.
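For example (a minimal sketch, deliberately written so it fails to compile; the error wording is paraphrased):

Code:
fn main() {
    let x = 5;
    x = 6;                 // error: cannot assign twice to immutable variable

    let mut v = vec![1, 2, 3];
    let borrowed = &v;     // shared borrow of v
    v.push(4);             // error: cannot mutate v while it is borrowed
    println!("{:?}", borrowed);
}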
 

Koren

Member
Sure, but getting high performance out of C++ isn't really the problem most people have with it
Indeed.

But even outside of performance, understanding how C++ handles pass-by-value can avoid a lot of trouble. Granted, with C++14 we're closer to sanity, but you can still easily stumble into big problems, since the language allows you to do pretty much whatever you want with constructors and destructors.

Half a dozen years ago, I was trying to get used to the Rule of Three. It changes each time they improve the standard, and the code you'll find online (and also in courses/tutorials/books) will give you a patchwork of strategies, resulting in really strange code in the end, I fear.

so I assume that's not what the poster was referring to. More likely it was about memory leaks, memory corruption, managing memory so it's deleted at the right time, that kind of thing.
Yes... But memory corruption can come back in force because of class copies if you're not really, really careful.

Those kinds of issues are almost non-existent in modern C++.
Assuming modern = strict C++14, I'd say you're right, but that's assuming you know how to code correctly in C++14 (I've yet to find a good book, especially for newcomers, and I won't even mention finding one in my native language) and that you don't interact with non-C++14 code...

everywhere in your code, and then fixing the compiler errors that result. In the process you'll learn about move semantics as well.

From there you can audit your codebase for uses of new, and think "how can I eliminate this new?" You will almost always find a way, and probably learn a bunch of stuff along the way
You'll also learn a lot about:
- finding the very latest version of your compiler
- discovering the small hidden option (and include) that'll make make_unique work
- how to (not) decipher totally unreadable errors (the saner C++ becomes, the more unreadable the error messages become, it seems)
 
And null pointer exceptions, too. Do not exist in Rust.
I don't know Rust, but I'm skeptical about this claim. Are you saying every memory access is guarded by a lock? Even if that's true (which is very unlikely), it's still possible to have a race. Detecting race conditions is equivalent to the halting problem, so how can it provably prevent them?
There are caveats, so your skepticism is not misplaced. The first is that there isn't allowed to be more than one mutable pointer to the same memory at a time. This makes sure that you can't invalidate an iterator by accident, and it also means you can never cause a data race, because only one mutable pointer is ever lying around.
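The iterator case, as a sketch (this is exactly the kind of code the compiler rejects):

Code:
fn main() {
    let mut v = vec![1, 2, 3];
    for x in v.iter() {   // v is borrowed immutably for the whole loop
        if *x == 2 {
            v.push(99);   // error: cannot borrow v as mutable because
        }                 // it is also borrowed as immutable
    }
}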

The second is a clever use of interfaces. There are two empty interfaces that the compiler knows about: Send and Sync. Send means that a piece of data is allowed to cross thread boundaries because it doesn't refer to TLS. Sync means that a data structure can be accessed from multiple threads simultaneously because its methods are thread safe. The compiler enforces these by requiring those interfaces on any data type you try to use or access in those ways. So you can't accidentally share code that isn't thread safe across threads.
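Roughly what that enforcement looks like (a sketch; Rc is the non-atomically refcounted pointer, so it doesn't implement Send):

Code:
use std::rc::Rc;
use std::thread;

fn main() {
    let data = Rc::new(42);   // Rc's refcount isn't atomic, so Rc<T> is not Send
    thread::spawn(move || {   // error: Rc<i32> cannot be sent between
        println!("{}", data); // threads safely
    });
}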

The third is the escape hatch. Rust has an unsafe keyword, just like C#. So if you wanted to write a fancier data structure like a mutex, you would write code that uses unsafe sparingly, and then encapsulate all of that behind a safe method boundary. And hopefully, it truly is safe. Unsafe behavior that happens without using unsafe code is considered a bug in Rust and is always treated as such.
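A toy sketch of that encapsulation pattern (a hypothetical function, not from any real library):

Code:
// The unsafe block is an internal detail; the bounds check above it
// is what upholds the safety invariant for callers.
fn first(bytes: &[u8]) -> Option<u8> {
    if bytes.is_empty() {
        None
    } else {
        // Sound: we just verified that index 0 is in bounds.
        Some(unsafe { *bytes.get_unchecked(0) })
    }
}

fn main() {
    assert_eq!(first(b"hi"), Some(b'h'));
    assert_eq!(first(b""), None);
}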

What this means for the end user is that if they never use the unsafe keyword, they are guaranteed by the Rust compiler and ecosystem not to encounter data races or unsafe memory behavior. You cannot accidentally create an unsafe data structure that will be used across thread boundaries, because the requirement for doing so is to tell the compiler "this data structure is thread safe". I.e. it is always a conscious decision of the programmer.

Oh, and a handy result of this is that you lock data, not code. Clever use of RAII means you can never forget to unlock a data structure at the end of a critical section.
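A sketch of what locking data rather than code looks like:

Code:
use std::sync::Mutex;

fn main() {
    let counter = Mutex::new(0);  // the lock wraps the data itself
    {
        let mut guard = counter.lock().unwrap(); // acquire the lock
        *guard += 1;              // the guard is the only path to the data
    }                             // guard dropped here: unlocked automatically
    println!("{}", *counter.lock().unwrap());
}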

If you think this is too stringent, there are already tons of libraries in the wild that let you do general task parallelism, data parallelism, concurrent message passing, mutexes, etc., all without having to think about data races. It becomes a non-issue for anyone who chooses not to think about it but still wants to use those features. So simple concurrency and parallelism become fearless and easy, and the language gives you unbeatable tools for building more advanced use cases like lock-free queues. And also, two pieces of code that are safe on their own, written by separate programmers, can't come together and become unsafe because certain things weren't considered.

These defaults and features actually make half of Valgrind meaningless or irrelevant in Rust, which really surprised me.
 
Indeed.

But even outside of performance, understanding how C++ handles pass-by-value can avoid a lot of trouble. Granted, with C++14 we're closer to sanity, but you can still easily stumble into big problems, since the language allows you to do pretty much whatever you want with constructors and destructors.

Half a dozen years ago, I was trying to get used to the Rule of Three. It changes each time they improve the standard, and the code you'll find online (and also in courses/tutorials/books) will give you a patchwork of strategies, resulting in really strange code in the end, I fear.


Yes... But memory corruption can come back in force because of class copies if you're not really, really careful.


Assuming modern = strict C++14, I'd say you're right, but that's assuming you know how to code correctly in C++14 (I've yet to find a good book, especially for newcomers, and I won't even mention finding one in my native language) and that you don't interact with non-C++14 code...


You'll also learn a lot about:
- finding the very latest version of your compiler
- discovering the small hidden option (and include) that'll make make_unique work
- how to (not) decipher totally unreadable errors (the saner C++ becomes, the more unreadable the error messages become, it seems)

I think error messages have come a long way. One thing I wish they'd do is reverse the order of messages. When you have an error in template code, it prints the error at the failure site first, then the function it was called from, continuing all the way up to the place you wrote the code. But that last frame is usually the one you want, so you have to skip the first 20 or 30 lines until you find the line that points to the code you actually wrote.
 

Koren

Member
That kind of logic doesn't apply to homework limitations, there's a list of things you can use and the rest you can't.
I really think that wondering whether you can use unique_ptr makes as much sense as wondering whether you can use "while". It's at the core of the language now, and a good habit to pick up.

I'd be curious to see the "list"...

We were using some old version of C++ anyway, not sure if it had those pointers
IIRC, shared_ptr, unique_ptr and make_shared are C++11; make_unique is C++14.

auto_ptr (now deprecated) can be a replacement if unique_ptr is unavailable. It's C++98, so even with a VERY old compiler, it should be available.
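One caveat if you go that route (a minimal sketch of the classic auto_ptr trap, which is why it was deprecated and eventually removed from the standard):

Code:
#include <memory>

struct Foo { int x; };

int main() {
    std::auto_ptr<Foo> p(new Foo());
    std::auto_ptr<Foo> q = p;  // the "copy" silently transfers ownership:
                               // p is now NULL, which surprises people
    return 0;                  // q deletes the Foo here
}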

I think error messages have come a long way.
Well, the more you use modern features, the more they're tied to templates, and the more the verbosity increases...

This is what I get if I forget to link against the C++ standard library:

Code:
/tmp/ccq8d3Hg.o: In function `std::_MakeUniq<int>::__single_object std::make_unique<int, int>(int&&)':
unique.cxx:(.text._ZSt11make_uniqueIiIiEENSt9_MakeUniqIT_E15__single_objectEDpOT0_[_ZSt11make_uniqueIiIiEENSt9_MakeUniqIT_E15__single_objectEDpOT0_]+0x25): undefined reference to `operator new(unsigned long)'
/tmp/ccq8d3Hg.o: In function `std::default_delete<int>::operator()(int*) const':
unique.cxx:(.text._ZNKSt14default_deleteIiEclEPi[_ZNKSt14default_deleteIiEclEPi]+0x18): undefined reference to `operator delete(void*)'
/tmp/ccq8d3Hg.o:(.eh_frame+0x4b): undefined reference to `__gxx_personality_v0'
collect2: error: ld returned 1 exit status
I think it can scare newcomers...
 
I really think that wondering whether you can use unique_ptr makes as much sense as wondering whether you can use "while". It's at the core of the language now, and a good habit to pick up.

I'd be curious to see the "list"...


IIRC, shared_ptr, unique_ptr and make_shared are C++11; make_unique is C++14.

auto_ptr (now deprecated) can be a replacement if unique_ptr is unavailable. It's C++98, so even with a VERY old compiler, it should be available.


Well, the more you use modern features, the more they're tied to templates, and the more the verbosity increases...

This is what I get if I forget to link against the C++ standard library:

Code:
/tmp/ccq8d3Hg.o: In function `std::_MakeUniq<int>::__single_object std::make_unique<int, int>(int&&)':
unique.cxx:(.text._ZSt11make_uniqueIiIiEENSt9_MakeUniqIT_E15__single_objectEDpOT0_[_ZSt11make_uniqueIiIiEENSt9_MakeUniqIT_E15__single_objectEDpOT0_]+0x25): undefined reference to `operator new(unsigned long)'
/tmp/ccq8d3Hg.o: In function `std::default_delete<int>::operator()(int*) const':
unique.cxx:(.text._ZNKSt14default_deleteIiEclEPi[_ZNKSt14default_deleteIiEclEPi]+0x18): undefined reference to `operator delete(void*)'
/tmp/ccq8d3Hg.o:(.eh_frame+0x4b): undefined reference to `__gxx_personality_v0'
collect2: error: ld returned 1 exit status
I think it can scare newcomers...

I don't use gcc, is this the result of not passing -std=c++11, or something else?
 

JeTmAn81

Member
Sure, but getting high performance out of C++ isn't really the problem most people have with it, so I assume that's not what the poster was referring to. More likely it was about memory leaks, memory corruption, managing memory so it's deleted at the right time, that kind of thing.

Those kinds of issues are almost non-existent in modern C++.

You can start by just making the following trivial change
Code:
// old way
Foo *f = new Foo(1, 2, 3);

// new way
auto f = std::make_unique<Foo>(1, 2, 3);

everywhere in your code, and then fixing the compiler errors that result. In the process you'll learn about move semantics as well.

From there you can audit your codebase for uses of new, and think "how can I eliminate this new?" You will almost always find a way, and probably learn a bunch of stuff along the way

Isn't this basically garbage collection in C++? Seems blasphemous.

Anyway, this appears to be similar to the factory pattern which helps you decouple instantiation of objects from the code that uses them. It's quite handy.

https://en.m.wikipedia.org/wiki/Factory_method_pattern
 
Isn't this basically garbage collection in C++? Seems blasphemous.

Anyway, this appears to be similar to the factory pattern which helps you decouple instantiation of objects from the code that uses them. It's quite handy.

https://en.m.wikipedia.org/wiki/Factory_method_pattern

Not really, you still have explicit control over when the memory is reclaimed by controlling the scope in which it's declared.

It's not really a factory pattern so much as it is making safe usage opt-out rather than opt-in.

With a factory pattern you can define different factories that do different things, but that's not really the point of this. The point is to design away the possibility of memory leaks and use-after-free unless you go out of your way to opt out of that.
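Here's a minimal sketch of both points: reclamation happens exactly where the scope says, and copying is disallowed unless you explicitly move:

Code:
#include <iostream>
#include <memory>

struct Resource {
    ~Resource() { std::cout << "freed\n"; }
};

int main() {
    std::cout << "before\n";
    {
        auto r = std::make_unique<Resource>();
        // auto r2 = r;         // won't compile: unique_ptr can't be copied,
        auto r2 = std::move(r); // only moved, so ownership stays unambiguous
    }   // "freed" prints exactly here, when r2 leaves scope
    std::cout << "after\n";
}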
 

Kieli

Member
As someone who is very new to software dev, I want to get all of y'alls wisdom on technology and languages and tools and outlook.

I'm afraid to get into mobile development as the Android and iOS app scene hardly seems stable. That being said, if the tools used to code said apps are easily transferable to more traditional applications, then that isn't so bad.

I've heard mixed reactions about web development. But it seems that that is here to stay. Since I hardly know anything about front vs. back end, I'll probably go full-stack.

Any thoughts on QA engineering? It seems a bit too low on technical skills, even if there is light scripting and automation involved. I also foresee this position being the first to be outsourced (the non-technical QA work has already been outsourced). Even if I do this for a few years, it'll only be temporary before I make a transition to dev. It's just that I may carry a stigma among employers who see my skillset as irrelevant for dev.

As for languages, I'm currently going to continue with Java and C++. I plan to learn some Python, but beyond that, I don't really know.

I'd like to add some personal projects to my portfolio, and these books seem to garner good ratings:

Web Design with HTML, CSS, JavaScript and jQuery Set by Jon Duckett
JavaScript: The Good Parts by Douglas Crockford
PHP and MySQL Web Development by Luke Welling & Laura Thomson

With these books, my hope is to gain some broad exposure to basic notions such as front-end web development, some scripting languages, and back-end tech like SQL.

I'd also like to learn more about network and wireless protocols, but I think that'll have to wait.

Edit: I wonder if I should also invest in Design Patterns by the Gang of Four, but I dunno if I can appreciate it at my current skill level. Perhaps a lighter introduction to design patterns would be appropriate?
 

Granadier

Is currently on Stage 1: Denial regarding the service game future
iOS and Android are both stable and in demand. iOS more so.

What I get from your post, though, is that you are spreading yourself too thin, mentally. Try to focus on one language, two at the most. Learn that language's syntax and then begin to learn how to apply it to programs and patterns.

Once you are comfortable with that, you will be in a much better position to choose your main focus (if that ends up being the same thing you've been doing, then great!). As a beginner, though, you should focus less on the faraway goal (working in industry) and more on the close goal of becoming a competent programmer.
 

Kieli

Member
I suggest avoiding QA. You may not even be doing any scripting at all, and even if you are, it will likely be secondary to your standard role.

How about for an internship? I definitely plan to work in dev as soon as possible, but I want to build some experience before I apply in earnest.

Edit: I've heard of Head First. I'll check it out!
 

Kalnos

Banned
As someone who is very new to software dev, I want to get all of y'alls wisdom on technology and languages and tools and outlook.

If you're interested in frontend web dev then you want to learn vanilla JavaScript. If you have a good grasp of JavaScript then you will be able to pick up new libraries/frameworks (Angular, Ember, jQuery, w/e) pretty quickly. Honestly, the JavaScript scene changes so quickly that it's important to have the basics locked down. I would spend some time really understanding CSS as well; people do some really hacky stuff with it to make it do what they want, in my experience.

What backend language you learn doesn't matter too much IMO, just make sure you have a solid grasp of whatever you decide to learn. C#/Java will always be in high demand, but smaller companies/startups often use newer/different technology like Node, Go, etc.

Definitely learn SQL, don't get sucked into using Mongo for simplicity.

How about for an internship? I definitely plan to work in dev as soon as possible, but I want to build some experience before I apply in earnest.

No reason to avoid a dev role to be honest. I knew absolutely nothing about SQL for instance when I got my co-op and I learned a fuck load on the job. Definitely more than I ever learned in college.
 

Kieli

Member
No reason to avoid a dev role to be honest. I knew absolutely nothing about SQL for instance when I got my co-op and I learned a fuck load on the job. Definitely more than I ever learned in college.

It's not so much that I'm avoiding dev, but I already have an offer for a QA role (which I haven't accepted yet). So I'm debating whether I should accept it and learn on my own time, or reject it and try to find a dev internship that I'm likely underqualified for (I've interviewed for a few coding internships, but haven't landed a single offer).
 

Koren

Member
I don't use gcc, is this the result of not passing -std=c++11, or something else?
No, it's a linking problem, not a compilation one. gcc will compile C++ programs, but it links them with libc, not libstdc++. It's the result of not passing -lstdc++ (or of not using g++ instead of gcc).
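In other words (using the same file as in the error log above):

Code:
$ gcc unique.cxx -std=c++14            # undefined reference to operator new, etc.
$ gcc unique.cxx -std=c++14 -lstdc++   # links, but easy to forget
$ g++ unique.cxx -std=c++14            # the simple fix: g++ adds libstdc++ itself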

Isn't this basically garbage collection in C++?
No more than the fact that an int or an int[] gets deleted when you exit the scope...

I could understand how shared_ptr may sound strange coming from C, since you let the compiler/code do the reference counting instead of doing it yourself, but the deletion still happens immediately, as soon as the last shared_ptr is destroyed.

The issue with garbage collection isn't the fact that it removes dead objects automatically; it's the fact that it does so whenever it wants, removing some control from the programmer (and possibly creating moments where the program "stops").
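A quick sketch of the difference (deletion stays deterministic with shared_ptr):

Code:
#include <iostream>
#include <memory>

struct Obj {
    ~Obj() { std::cout << "deleted\n"; }
};

int main() {
    std::shared_ptr<Obj> a = std::make_shared<Obj>();
    {
        std::shared_ptr<Obj> b = a;          // refcount goes to 2
        std::cout << a.use_count() << "\n";  // prints 2
    }                                        // b destroyed, refcount back to 1
    std::cout << a.use_count() << "\n";      // prints 1
}   // refcount hits 0 right here, and "deleted" prints immediately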
 
As someone who is very new to software dev, I want to get all of y'alls wisdom on technology and languages and tools and outlook.

I'm afraid to get into mobile development as the Android and iOS app scene hardly seems stable. That being said, if the tools used to code said apps are easily transferable to more traditional applications, then that isn't so bad.

I've heard mixed reactions about web development. But it seems that that is here to stay. Since I hardly know anything about front vs. back end, I'll probably go full-stack.
Learn Java/C# if you want stable jobs that might be seen as boring and unsexy but bring a steady paycheck. Good if you want to settle down and are thinking about starting a family. Both languages are here to stay for god knows how long, so no worries about your skills becoming obsolete anytime soon.

Learn Python and JavaScript for the sexy startup jobs at companies that promise you they'll become the next Facebook and will pay you in stock that might be worth nothing. Also be prepared to work insane hours. Good if you are in your twenties with no obligations.

Whatever you do, learn the basics of SQL and databases. Postgres is probably the best bet, unless you decide to learn C#, then go for SQL Server.
 

Ambitious

Member
Oh boy. Gonna have my first job interview next Wednesday. I hope I can keep my nerves under control, otherwise it's gonna be an embarrassing disaster.

It was alright. I was nervous and said a few stupid things, but it wasn't as bad as it could have been. They think I'd be a good fit for the company, they said, and I have a "high chance" to be hired. They're gonna tell me their decision next Wednesday.

The company sounded pretty cool. Personal atmosphere, flat hierarchy, training opportunities, a lot of freedom, nice perks, and the best view in town. It's mainly Java Enterprise stuff, but they also use other languages and technologies from time to time. I'm fine with that.

Well, now it's Monday afternoon the week after. No call. I'm getting antsy.
 

Zoe

Member
It's not so much that I'm avoiding dev, but I already have an offer for a QA role (which I haven't accepted yet). So I'm debating whether I should accept it and learn on my own time, or reject it and try to find a dev internship that I'm likely underqualified for (I've interviewed for a few coding internships, but haven't landed a single offer).

If you need a job, just take it and keep looking on the side.
 

Granadier

Is currently on Stage 1: Denial regarding the service game future
Learn Java/C# if you want stable jobs that might be seen as boring and unsexy but bring a steady paycheck. Good if you want to settle down and are thinking about starting a family. Both languages are here to stay for god knows how long, so no worries about your skills becoming obsolete anytime soon.

Learn Python and JavaScript for the sexy startup jobs at companies that promise you they'll become the next Facebook and will pay you in stock that might be worth nothing. Also be prepared to work insane hours. Good if you are in your twenties with no obligations.

Whatever you do, learn the basics of SQL and databases. Postgres is probably the best bet, unless you decide to learn C#, then go for SQL Server.

This post is so jaded, haha.
 
Well, now it's Monday afternoon the week after. No call. I'm getting antsy.

Send an e-mail, be polite. Just follow up and let them know you're still interested in the position.

The delay could be due to any number of things. I personally err on the pessimistic side (I am an engineer, not a salesman), but there is typically some work-related inefficiency responsible for dragging things out. It's not anything personal.

No more C/C++ love? T_T

Oh, it's certainly out there! It's just that it might not be a good first choice for server-side programming unless you have good people who don't just think they know what they're doing.
 

komplanen

Member
Question

char stuff[2][3] = { data };

&stuff = address of where array starts, right?
&stuff[0] = first array's start address?
&stuff[3] = second array's start address?
 
Question

char stuff[2][3] = { data };

&stuff = address of where array starts, right?
&stuff[0] = first array's start address?
&stuff[3] = second array's start address?

This is easy to check with a debugger.

[image: debugger watch window showing the addresses of the stuff expressions]
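Or, without a debugger, you can just print the addresses (quick sketch):

Code:
#include <cstdio>

int main() {
    char stuff[2][3] = { {1, 2, 3}, {4, 5, 6} };
    std::printf("%p\n", (void*)&stuff);     // same address...
    std::printf("%p\n", (void*)&stuff[0]);  // ...as this one
    std::printf("%p\n", (void*)&stuff[1]);  // 3 bytes further: the second row
}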


From this, can you confirm or deny your hypothesis?
 

Koren

Member
Question

char stuff[2][3] = { data };

&stuff = address of where array starts, right?
Funnily enough, I can't find a reference (pun not intended) that confirms it works...

stuff by itself is already the address where the array starts.

&stuff[0] = first array's start address?
Same problem... stuff (and stuff[0]) are already the "first" array's start address.

&stuff[0][0] too.

But I think &stuff[0] will work, although it "sounds strange" to me, and I can't find confirmation :/

&stuff[3] = second array's start address?
[1], definitely not [3]

and stuff[1] or &stuff[1][0], even if &stuff[1] may work (again, for the same reason)
 

Koren

Member
This is easy to check with a debugger.
It's easy to check that something doesn't work, but when it seems to work, I still want to understand why.

I can't find what happens in the simple case of using &a when a is declared as int a[10]... I expect the & to do nothing, but I'd like to see it written somewhere. I've already checked several books (unfortunately, I don't have many on hand), such as K&R, and I can't find anything on this.

Can you help me get some sleep tonight?
 

Koren

Member
Wouldn't the address for a multi dim array be &&stuff?
2D arrays in C are just 1D arrays in memory...

When you write
Code:
int t[2][3];
you actually get a 1D array with 2 elements, each of type int[3].

For example, if t[i][j] = 10*i + j, you get

Code:
t, t[0] or &t[0][0]  ->  0x76534 : 00  <- t[0][0]
                     ->  0x76538 : 01  <- t[0][1]
                     ->  0x7653C : 02  <- t[0][2]
t[1] or &t[1][0]     ->  0x76540 : 10  <- t[1][0]
                     ->  0x76544 : 11  <- t[1][1]
                     ->  0x76548 : 12  <- t[1][2]


Some misunderstanding comes from the fact that t[i][j] can be used when t has been declared as
Code:
int t[2][3];
or
Code:
int t[][3];

Or when t has been declared as
Code:
int* t[2];

In that last case, it's not really a 2D array, even if it can behave like one: the rows can be separated in memory, overlap, or have different lengths (since there's no actual length stored anywhere, the programmer is responsible for what's in each int*)...


But in both cases, I don't see how &&t could have any meaning (even if t is not itself an address, &t is one, so &(&t) doesn't seem meaningful)

Granted, that's true as long as you don't meet a crazy guy who thought overloading operator& was a great idea...
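To make the difference between the two forms concrete (a small sketch; it prints 12 12):

Code:
#include <cstdio>

int main() {
    int flat[2][3] = { {0, 1, 2}, {10, 11, 12} };  // one contiguous block of 6 ints

    int row0[3] = {0, 1, 2};
    int row1[4] = {10, 11, 12, 13};  // rows may have different lengths...
    int* jagged[2] = { row0, row1 }; // ...and live anywhere in memory

    // Same indexing syntax, completely different memory layout:
    std::printf("%d %d\n", flat[1][2], jagged[1][2]);  // prints 12 12
}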
 
It's easy to check that something doesn't work, but when it seems to work, I still want to understand why.

I can't find what happens in the simple case of using &a when a is declared as int a[10]... I expect the & to do nothing, but I'd like to see it written somewhere. I've already checked several books (unfortunately, I don't have many on hand), such as K&R, and I can't find anything on this.

Can you help me get some sleep tonight?

http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2013/n3690.pdf

1.8.6 Unless an object is a bit-field or a base class subobject of zero size, the address of that object is the address of the first byte it occupies. Two objects that are not bit-fields may have the same address if one is a subobject of the other, or if at least one is a base class subobject of zero size and they are of different types; otherwise, they shall have distinct addresses.

4.2 Array-to-pointer conversion [conv.array]
1 An expression of type "array of N T", "array of runtime bound of T", or "array of unknown bound of T" can be converted to a prvalue of type "pointer to T". The result is a pointer to the first element of the array

In the case of "int a[10]", these two show directly that a and &a denote the same address. For the first element, obviously the first byte is part of the representation of the first element, so again by 1.8.6, &a[0] is a pointer to the first byte of a[0], and thus is equal to &a and a.
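In code, the distinction between the two is only in the types (sketch):

Code:
int main() {
    int a[10];
    int  *p = a;        // array-to-pointer conversion: same as &a[0]
    int (*q)[10] = &a;  // &a is a pointer to the whole array, a different type
    // p and q hold the same address, but `a == &a` won't even compile,
    // because int* and int(*)[10] aren't comparable without a cast
    (void)p; (void)q;
}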
 

Ambitious

Member
It's easy to check that something doesn't work, but when it seems to work, I still want to understand why.

I can't find what happens in the simple case of using &a when a is declared as int a[10]... I expect the & to do nothing, but I'd like to see it written somewhere. I've already checked several books (unfortunately, I don't have many on hand), such as K&R, and I can't find anything on this.

Can you help me get some sleep tonight?

a is the same as a[0]. a[0] is the same as *(a+0). Thus, &a = &a[0] = &(*(a+0)) = a+0 = a.
 

Koren

Member
http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2013/n3690.pdf

In the case of "int a[10]", these two show directly that a == &a. For the first element, obviously the first byte is part of the representation of the first element, so again by 1.8.6, &a[0] is a pointer to the first byte of a[0], and thus is equal to &a and a.
Thanks...

I couldn't see how &a could be different from a, but it's interesting to see how it works.

I still think it's a bit strange to use & on a...

(and if you're picky, they're not totally "equal" (they only dereference the same way, if I read correctly), since they're of different types, even if they're both pointers and hold the same address... the compiler will refuse a == &a)

a is the same as a[0]. a[0] is the same as *(a+0). Thus, &a = &a[0] = &(*(a+0)) = a+0 = a.
Nice...

That could be convincing, but with the same proof, you could expect that either &(&a) or &&a would work.

&(&a) = &(&a[0]) = &(&(*(a+0))) = &(a+0) = &a = &a[0] = &(*(a+0)) = a+0 = a

They don't, though...


Edit : can we at least all agree on the fact that the address of the second "line" is &0[t]+1 ? ^_^
 
Thanks...

I couldn't see how &a could be different from a, but it's interesting to see how it works.

I still think it's a bit strange to use & on a...

(and if you're picky, they're not totally "equal", since they're of different types, even if they're both pointers and hold the same address... the compiler will refuse a == &a)


Nice...

That could be convincing, but with the same proof, you could expect that either &(&a) or &&a would work.

&(&a) = &(&a[0]) = &(&(*(a+0))) = &(a+0) = &a = &a[0] = &(*(a+0)) = a+0 = a

They don't, though...

That's because you can't take the address of an rvalue.
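In other words (sketch):

Code:
int main() {
    int a[10];
    int (*p)[10] = &a;  // fine: a is an lvalue, so it has an address
    // auto q = &(&a);  // won't compile: &a is a temporary (a prvalue),
                        // and you can't take the address of an rvalue
    (void)p;
}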
 

komplanen

Member
Thanks for all the help and interesting discussion :)

Where's a good source for learning the Visual Studio debugger for issues like this?
 
Hey guys, looking for advice on a good book to learn Java.

I used to program in Java in college years ago so I do understand the basic gist but it's been a long time and now I think I need to get back into it. I can program in Python and a little Javascript too.

I have purchased paperback "The Pragmatic Programmer" and "Growing Object-Oriented Software Guided by Tests" from Amazon so I'll have them I think by the middle of next month.

I have "Clean Code" by Robert C Martin in PDF format and also Gang of Four Design Patterns in PDF.

So far I haven't read any of these yet so I'll spend the next year or so going through them.

Can anyone recommend a good general book for Java or I suppose what is generally considered the "best" Java book? I suppose I'm sort of a beginner because I haven't used it in a long time and only used it to a basic level.
 
Thanks for all the help and interesting discussion :)

Where's a good source for learning the Visual Studio debugger for issues like this?

The basics are pretty simple, so I would just read the MSDN documentation.

https://msdn.microsoft.com/en-us/library/sc65sadd.aspx

It uses C# for all the examples, but the concepts are basically the same, with a few exceptions. There are some really good books about debugging, but most of them are a bit more advanced. If your question is "How do I use this thing?" then MSDN documentation is probably the best place to start.
 

Makai

Member
Hey guys, looking for advice on a good book to learn Java.

I used to program in Java in college years ago so I do understand the basic gist but it's been a long time and now I think I need to get back into it. I can program in Python and a little Javascript too.

I have purchased paperback "The Pragmatic Programmer" and "Growing Object-Oriented Software Guided by Tests" from Amazon so I'll have them I think by the middle of next month.

I have "Clean Code" by Robert C Martin in PDF format and also Gang of Four Design Patterns in PDF.

So far I haven't read any of these yet so I'll spend the next year or so going through them.

Can anyone recommend a good general book for Java or I suppose what is generally considered the "best" Java book? I suppose I'm sort of a beginner because I haven't used it in a long time and only used it to a basic level.
Head First Java
 
I'll take an advanced reference if there's one you'd recommend.

I'm used to debuggers, but I've never thought about buying a book on this topic.

My two recommendations are windows specific, but they are here:

https://www.amazon.com/dp/0735662789/?tag=neogaf0e-20

https://www.amazon.com/dp/0321374460/?tag=neogaf0e-20

The second is more advanced (and also better, for that reason). There are a number of books that focus on other debuggers and other platforms, but I haven't read them. I think most of the techniques (especially in the second book) are applicable on other platforms if you have a good enough knowledge of your debugger's command set.
 