
Programming |OT| C is better than C++! No, C++ is better than C

Water

Member
This is basically how I feel about programming environments at this point too. We're a Windows shop at work, and I find it perfectly usable; it has its own benefits. At home I use Ubuntu for personal and hobby projects. There's virtually no difference in usability, and nothing I'd rank one above the other on in general. They're just different, and anyone who says there is a best one can only point to specific circumstances or their own personal preference, not to many solid objective facts.

There are solid objective facts relevant to specific circumstances. And every dev has specific circumstances, it's useless to talk about fictional generic developers.

My impression is non-Windows environments have an edge in any command line stuff (still no decent terminal or shell in Windows), developing with free tools because there's more stuff available (valgrind, ...), new versions of everything tend to be available faster, they are easier to set up and keep up to date due to package managers, it's easier to move your configurations around from machine to machine - that kind of stuff.

I've been working on Windows quite a bit, doing graphics programming and large-scale application development. I have honestly tried to set things up as well as possible, but even after putting in the work, interaction with Windows just feels tolerable at best. I don't think that is merely subjective. From all sorts of angles - out-of-the-box capability, time spent to ready a new machine for use, the terminal experience, even customizing keyboard layouts, system input field and windowing behavior - OS X has consistently allowed me to do more and in a saner fashion. I'd currently prefer Windows only for graphics work.
 

Slavik81

Member
Would somebody mind running this through matlab?
Code:
in=[0, 0, 0, 1, 0;  1, 1, 1, 1, 0;  0, 0, 1, 1, 0;  0, 0, 1, 1, 0;  0, 0, 0, 1, 0]
out=bwmorph(in,'shrink')
I still need this, btw. I want to see if the Matlab result is the same as the Octave result.

EDIT: I've managed to find a copy.
 
There are solid objective facts relevant to specific circumstances. And every dev has specific circumstances, it's useless to talk about fictional generic developers.

My impression is non-Windows environments have an edge in any command line stuff (still no decent terminal or shell in Windows), developing with free tools because there's more stuff available (valgrind, ...), new versions of everything tend to be available faster, they are easier to set up and keep up to date due to package managers, it's easier to move your configurations around from machine to machine - that kind of stuff.

I've been working on Windows quite a bit, doing graphics programming and large-scale application development. I have honestly tried to set things up as well as possible, but even after putting in the work, interaction with Windows just feels tolerable at best. I don't think that is merely subjective. From all sorts of angles - out-of-the-box capability, time spent to ready a new machine for use, the terminal experience, even customizing keyboard layouts, system input field and windowing behavior - OS X has consistently allowed me to do more and in a saner fashion. I'd currently prefer Windows only for graphics work.

Right, of course there are specific instances, but the point I was making is that specific instances are exactly the opposite of what you need to prove a one-is-better argument.
 

upandaway

Member
So I'm thinking of applying for Google STEP in a few days, and I just finished my first semester and have no CS experience otherwise (a bit in IT support). Does anyone know if I should make something of a resume to upload in the application or should I skip that part? What can I even put on there?

I don't think I'm supposed to put a git repo on it, right?
 

Mabef

Banned
It's not a thing in c/c++, but it is in some other languages. Usually functional programming languages. Really depends how complicated the operations are, but one option if they're pretty complicated is to have a matrix of function or base class pointers with a common signature, then index into the matrix and call the function.

Maybe post some more specifics about what you're doing, might help come up with the best solution.
Thanks. I had a pretty simple case of 3 booleans. So I converted the booleans into one binary-like integer, and did a switch statement on that. It just made me wonder, what's the 'big' way of handling this? Mine doesn't scale very well.
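Roughly like this (made-up condition names, just to show the shape of it):

Code:
#include <iostream>

int main() {
    bool a = true, b = false, c = true;
    // Pack the three booleans into bits 0..2 of one integer.
    int key = (a ? 1 : 0) | (b ? 2 : 0) | (c ? 4 : 0);
    switch (key) {  // one case per combination, up to 2^3 = 8 of them
        case 0:  std::cout << "none set\n"; break;
        case 5:  std::cout << "a and c set\n"; break;
        default: std::cout << "some other combination\n"; break;
    }
    return 0;
}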

...one option if they're pretty complicated is to have a matrix of function or base class pointers with a common signature, then index into the matrix and call the function.
Unfortunately I don't know enough to follow you completely :( but I might get the gist. If I create a matrix and give each test-variable its own dimension, I can think of the matrix's coordinates as a "case." At each coordinate, I can point to the code that should occur for that "case." Yeah?
 
So I'm thinking of applying for Google STEP in a few days, and I just finished my first semester and have no CS experience otherwise (a bit in IT support). Does anyone know if I should make something of a resume to upload in the application or should I skip that part? What can I even put on there?

I don't think I'm supposed to put a git repo on it, right?

I'd argue you're sabotaging yourself by not putting any repositories on there.
 
So I'm thinking of applying for Google STEP in a few days, and I just finished my first semester and have no CS experience otherwise (a bit in IT support). Does anyone know if I should make something of a resume to upload in the application or should I skip that part? What can I even put on there?

I don't think I'm supposed to put a git repo on it, right?
Google has a wall of HR that's hard to get past. Because of that bulky human element, it would really help to get ahold of somebody who has gotten through STEP. Maybe somebody watching this thread on these forums?

I always put any websites with open source code I want people to look at on my resume; that is frequently valuable. I've had entire interviews revolve around open source stuff that I've done, even minor projects that were just for fun.

If you don't have any other CS experience, IT work can be sufficient for now, but do consider dropping it after you pick up a year or two of CS work experience.

A quick resume can't hurt, even if it's got a few extracurriculars that don't seem relevant to programming. Something that can give people an impression of what interests you besides keyboards and screens and things.
 

upandaway

Member
I see, thanks. Then I can tidy it up a bit and put it there. Am I supposed to write a bit about some of the repos inside the resume, or just give the address and rely on the repos' READMEs?

I haven't actually dealt with any big open source stuff, just my own personal projects, so I'm not sure if it's worth anything, but whatever.
 
Thanks. I had a pretty simple case of 3 booleans. So I converted the booleans into one binary-like integer, and did a switch statement on that. It just made me wonder, what's the 'big' way of handling this? Mine doesn't scale very well.
Finite state machines? :) Usually described by a variable of an enumerated type, so the various states read as more than just numbers, fed into a switch statement, with one large switch statement per function body. Similar to what you've done, but a bit more readable with the use of an enum. If you dig into OS kernel code or id Software code, you'll see this all the time.

It would also help to pull the logic executed for each switch case into its own function, which would make the whole switch immensely more readable.
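A minimal sketch of that shape, with made-up states (not from any real kernel):

Code:
#include <iostream>

enum State { STATE_IDLE, STATE_RUNNING, STATE_DONE };

// Each case's logic lives in its own small function.
State idle_step()    { std::cout << "starting\n"; return STATE_RUNNING; }
State running_step() { std::cout << "working\n";  return STATE_DONE; }

int main() {
    State s = STATE_IDLE;
    while (s != STATE_DONE) {
        switch (s) {           // one big switch drives the machine
            case STATE_IDLE:    s = idle_step();    break;
            case STATE_RUNNING: s = running_step(); break;
            default:            break;
        }
    }
    return 0;
}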

Unfortunately I don't know enough to follow you completely :( but I might get the gist. If I create a matrix and give each test-variable its own dimension, I can think of the matrix's coordinates as a "case." At each coordinate, I can point to the code that should occur for that "case." Yeah?
One straight C way, which is what cpp_is_king was suggesting, would be to have a 2D array populated with function pointers. Then use the enumerated types as indexes since they start at 0 and increment for every additional value like unsigned integers by default. Your first enumerated type (dimension 1) and second enumerated type (dimension 2) can be used to retrieve a function pointer, and you can invoke that function pointer to do what you want. No switch statements necessary.
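A quick sketch of that table, with made-up enums and handlers:

Code:
#include <iostream>

enum Color { RED, GREEN, COLOR_COUNT };
enum Shape { CIRCLE, SQUARE, SHAPE_COUNT };

void red_circle()   { std::cout << "red circle\n"; }
void red_square()   { std::cout << "red square\n"; }
void green_circle() { std::cout << "green circle\n"; }
void green_square() { std::cout << "green square\n"; }

// Enum values start at 0 and count up, so they index straight into the table.
void (*handlers[COLOR_COUNT][SHAPE_COUNT])() = {
    { red_circle,   red_square   },
    { green_circle, green_square },
};

int main() {
    handlers[GREEN][CIRCLE]();  // prints "green circle" -- no switch needed
    return 0;
}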
 
I see, thanks. Then I can tidy it up a bit and put it there. Am I supposed to write a bit about some of the repos inside the resume, or just give the address and rely on the repos' READMEs?

I haven't actually dealt with any big open source stuff, just my own personal projects, so I'm not sure if it's worth anything, but whatever.
Personal projects count! They count quite a bit. Don't feel intimidated or like you must do big OSS stuff; just showing that you can write working code makes a super big difference, and it really matters. It's hard enough as it is to find people who can write anything usable to begin with.

If you'd like to describe the repos, that would help. At the least, it provides context if you don't have blog posts or READMEs that can explain them to a non-technical person. I guarantee you that the HR people who will be the first to look at this aren't very technical.
 
Personal projects count! They count quite a bit. Don't feel intimidated or like you must do big OSS stuff; just showing that you can write working code makes a super big difference, and it really matters. It's hard enough as it is to find people who can write anything usable to begin with.

If you'd like to describe the repos, that would help. At the least, it provides context if you don't have blog posts or READMEs that can explain them to a non-technical person. I guarantee you that the HR people who will be the first to look at this aren't very technical.

Yup.
 
I just remembered one important point in the linux vs. windows thing. Windows has never had a stable ABI for its implementation of the CRT. In my opinion, this is the number one reason open source never really caught on there, and it has held back the platform. That is solved in Visual Studio 2015, which will be released later this year, and should remain solved forever. That will kill -- in my opinion -- the single biggest, or second biggest, hurdle to adoption of Windows as a development platform.

The other -- portability -- is as much of an advantage as it is a disadvantage. POSIX is a really poorly designed API, in my opinion, and it feels very antiquated.
 
Thanks. I had a pretty simple case of 3 booleans. So I converted the booleans into one binary-like integer, and did a switch statement on that. It just made me wonder, what's the 'big' way of handling this? Mine doesn't scale very well.


Unfortunately I don't know enough to follow you completely :( but I might get the gist. If I create a matrix and give each test-variable its own dimension, I can think of the matrix's coordinates as a "case." At each coordinate, I can point to the code that should occur for that "case." Yeah?

Pretty much yes. If the coordinates are integers that start from 0 and increase in sequence (or if you can come up with a function that maps coordinates to an integer sequence like this), you could use an actual 2d array. Otherwise you can use an std::unordered_map (in c++) or similar associative array in other languages. The key could be an std::pair<Type, Type2> and the value could be either a function pointer such as void (*)(int arg1, int arg2), or a pointer to a class, like Foo*.
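For the function pointer flavor, a minimal sketch (using std::map here because std::unordered_map needs a custom hash for an std::pair key; all the names are made up):

Code:
#include <iostream>
#include <map>
#include <utility>

void red_circle(int x, int y)   { std::cout << "red circle at " << x << "," << y << "\n"; }
void green_square(int x, int y) { std::cout << "green square at " << x << "," << y << "\n"; }

int main() {
    // Key: a (color, shape) pair; value: a function pointer with a common signature.
    std::map<std::pair<int, int>, void (*)(int, int)> table;
    table[std::make_pair(0, 0)] = red_circle;
    table[std::make_pair(1, 1)] = green_square;

    table[std::make_pair(1, 1)](10, 20);  // looks up and calls green_square
    return 0;
}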

The latter is more flexible but requires more care. The idea is that you make a class like this:

Code:
class Foo {
public:
   virtual ~Foo() {}
   // Two pure virtual methods with a common signature for derived classes to implement.
   virtual void Bar(int arg1, int arg2) = 0;
   virtual void Baz(int arg1, int arg2) = 0;
};

Then you can have arbitrarily many classes derive from Foo and implement the methods differently. This is more flexible than storing function pointers because (a) a class can contain state, and (b) you can have many different methods to select from, not just one function pointer.

You could write a calculator like this as follows (I think this will just compile as-is, but I don't have a compiler handy to test it out):

Code:
#include <cstdio>
#include <iostream>
#include <string>
#include <unordered_map>

struct Op {
   virtual ~Op() {}
   virtual float Execute(float arg1, float arg2) = 0;
};

struct Add : public Op {
   virtual float Execute(float arg1, float arg2) { return arg1 + arg2; }
};

struct Sub : public Op {
   virtual float Execute(float arg1, float arg2) { return arg1 - arg2; }
};

struct Mul : public Op {
   virtual float Execute(float arg1, float arg2) { return arg1 * arg2; }
};

struct Div : public Op {
   virtual float Execute(float arg1, float arg2) { return arg1 / arg2; }
};

int main(int argc, char **argv) {
    std::unordered_map<char, Op*> Ops;
    Ops['+'] = new Add();
    Ops['-'] = new Sub();
    Ops['*'] = new Mul();
    Ops['/'] = new Div();

    float arg1 = 0.0f;
    float arg2 = 0.0f;
    char op;
    std::string expr;
    std::cout << "Enter an expression: ";
    std::getline(std::cin, expr);  // read the whole line, since the expression contains spaces
    sscanf(expr.c_str(), "%f %c %f", &arg1, &op, &arg2);
    std::cout << "The answer is: " << Ops[op]->Execute(arg1, arg2) << "\n";
    return 0;  // (the Op objects are leaked; fine for a throwaway example)
}

Obviously you wouldn't need this design pattern for a trivial calculator, and I've left out a lot of error handling and other types of details here, but you get the idea.
 

Rush_Khan

Member
(C++) Hi. I was wondering whether it was bad practice or not to give a function two different names.

For example, I have a function called move_left(...) which would have the exact same code as a function called move_up(...), so instead of creating a new function with the exact same code, I wrote:

#define move_left move_up

and I did the same thing for move_down and move_right, respectively.

Is this bad practice? I thought it was pretty clever, but my assignment criteria says to avoid bad practices (such as writing everything in the main). It seems to be working in CodeBlocks but I'm not sure if this will work for other compiler programs.
 
(C++) Hi. I was wondering whether it was bad practice or not to give a function two different names.

For example, I have a function called move_left(...) which would have the exact same code as a function called move_up(...), so instead of creating a new function with the exact same code, I wrote:

#define move_left move_up

and I did the same thing for move_down and move_right, respectively.

Is this bad practice? I thought it was pretty clever, but my assignment criteria says to avoid bad practices (such as writing everything in the main). It seems to be working in CodeBlocks but I'm not sure if this will work for other compiler programs.
Why not have a "move" function that takes, as one of its arguments, a custom type called "rk_directionType", defined as an enumerated type that covers all directions, and pass the direction through as an argument instead? That's one cleaner alternative.

At some point you'll probably want to do something a little different for the implementations of left/up/down/right, and it makes sense to write code that has interfaces that can accommodate for that change in behavior.


Also, it's generally a bad idea to lean on the C pre-processor for solutions. Like, really bad.

C++ started as a C pre-processor extension before C++ compilers became commonplace. I'd recommend you do any sort of logic that you might want to do in a complex pre-processor macro in a more C++-ish way instead.

Macro abuse is its own circle of hell. Possibly worse than meta-programming abuse. The lack of type checking and ease of scoping bugs make it so.


EDIT: To fill out the "when you might want to do this" case: using a macro as a placeholder for a function name makes sense when you want to avoid the overhead of an extra function call, i.e. when, without the macro, you'd have one function that does nothing but call another function within its implementation.

The same goes for when someone might want to use a macro to define a function. Provided the macro is written such that it's protected against the usual macro scoping issues, you can avoid a function call by inserting a small two- or three-liner as a macro.

I still don't recommend it as a general practice. It's something I've seen in some fast math libraries, and code written around macros is really hard to debug, especially because, even if the macros follow all the best practices, the generated assembly tends to be verbose and messy; well-written functions protect your code against that. And C function calls don't really generate much overhead at all these days.

Besides, the C++ inline keyword avoids the function call for you. The only catch to using it is that C++ compilers can choose to ignore it, if they want.
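The classic illustration of the scoping hazard, with a made-up macro (not from any real math library):

Code:
#include <iostream>

#define SQUARE(x) x * x                    // unparenthesized: the classic scoping bug

inline int square(int x) { return x * x; } // type-checked, no expansion surprises

int main() {
    std::cout << SQUARE(1 + 2) << "\n";    // expands to 1 + 2 * 1 + 2, prints 5
    std::cout << square(1 + 2) << "\n";    // prints 9, as intended
    return 0;
}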
 

Aureon

Please do not let me serve on a jury. I am actually a crazy person.
Can someone explain to me in a fairly simple manner why many developers believe that development is done better on UNIX systems?

My school teaches all of its classes using Windows and my current internship has me working on Windows (using Visual Studio). This has me thinking that I won't have much UNIX experience by the time I finish school.

Are the tools better there? Do you get any benefits if you are still just working in an IDE? Is it all preference?

I've searched around before but never really found a good answer, so I'd appreciate any info!

Boils down to:

Grep
Apt-get
General shell behavior
General path behavior
General include behavior (Hi, Windows PATH. Hi, Registry. I hate you both.)
System stability
Collation standards compatibility ("What do you mean, ISO 1252? How do I use UTF-8?" If you ever utter those words, know that you have entered a well of tears.)
 

Mabef

Banned
Finite state machines? :) Usually described by a variable of an enumerated type, so the various states read as more than just numbers, fed into a switch statement, with one large switch statement per function body. Similar to what you've done, but a bit more readable with the use of an enum. If you dig into OS kernel code or id Software code, you'll see this all the time.

It would also help to pull the logic executed for each switch case into its own function, which would make the whole switch immensely more readable.

One straight C way, which is what cpp_is_king was suggesting, would be to have a 2D array populated with function pointers. Then use the enumerated types as indexes since they start at 0 and increment for every additional value like unsigned integers by default. Your first enumerated type (dimension 1) and second enumerated type (dimension 2) can be used to retrieve a function pointer, and you can invoke that function pointer to do what you want. No switch statements necessary.
Pretty much yes. If the coordinates are integers that start from 0 and increase in sequence (or if you can come up with a function that maps coordinates to an integer sequence like this), you could use an actual 2d array. Otherwise you can use an std::unordered_map (in c++) or similar associative array in other languages. The key could be an std::pair<Type, Type2> and the value could be either a function pointer such as void (*)(int arg1, int arg2), or a pointer to a class, like Foo*.

[...]

Thanks, you two. I'm going to hold onto these for reference as I dive into this and see if I can actually implement one of these methods.
 
Ok, I've been using my Windows machine for more coding work and want to better understand it. All I need to get work done is Vim and some proficiency with the shell. I already have gVim, so what are some resources for learning the Windows shell?

Unix guys, don't skewer me. I'm running three Arch Linux machines, so I know what I'm getting into.
 
You can burn a live USB in little time and boot into a no-consequences live session easily. If that turns out not to be so easy (secure boot or some other garbage), or your machine has a ton of RAM, a virtual machine is even easier.

Get your feet wet :)

Any recommendations on a distro? I've only ever used Lubuntu on a very low end laptop. My desktop is pretty beefy and could handle whatever, though. I don't think I've ever set up a VM either so that could be a fun experience :)

Should I just go with Ubuntu? Elementary OS looked interesting to me as well.
 
Boils down to:

Grep
Apt-get
General shell behavior
General path behavior
General include behavior (Hi, Windows PATH. Hi, Registry. I hate you both.)
System stability
Collation standards compatibility ("What do you mean, ISO 1252? How do I use UTF-8?" If you ever utter those words, know that you have entered a well of tears.)

* Windows has grep-style search; it's built into Visual Studio. I actually like it better than grep.
* apt-get. Agree with you there. Earlier I mentioned that VS2015 and beyond have forward-compatible CRT ABIs. I think this will mostly solve the apt-get problem, and make a similar package manager for Windows possible.
* General shell behavior. cmd sucks ass, have to agree with you there. OTOH, I rarely have a need for complex shell behavior. I think this one is personal preference; almost anything someone can do in a shell, I can do just as fast without a shell.
* General path behavior - how so? If you mean having backslashes in paths, I don't think it's that big of a deal. I would actually argue that allowing escapable characters in filenames is a flaw in unixy filesystems.
* General include behavior - Not sure what you mean. Doesn't Unix also have a PATH environment variable? And how does that relate to #includes?
 
(C++) Hi. I was wondering whether it was bad practice or not to give a function two different names.

For example, I have a function called move_left(...) which would have the exact same code as a function called move_up(...), so instead of creating a new function with the exact same code, I wrote:

#define move_left move_up

and I did the same thing for move_down and move_right, respectively.

Is this bad practice? I thought it was pretty clever, but my assignment criteria says to avoid bad practices (such as writing everything in the main). It seems to be working in CodeBlocks but I'm not sure if this will work for other compiler programs.

I certainly wouldn't do it this way. I'd probably do

enum kMoveDirection
{
kMoveDirection_Up,
kMoveDirection_Down,
kMoveDirection_Left,
kMoveDirection_Right,
};

void moveInADirection(kMoveDirection theDirection)
{
//Your move code goes here
}
 
I certainly wouldn't do it this way. I'd probably do

enum kMoveDirection
{
kMoveDirection_Up,
kMoveDirection_Down,
kMoveDirection_Left,
kMoveDirection_Right,
};

void moveInADirection(kMoveDirection theDirection)
{
//Your move code goes here
}

C++11 is your friend, and will help you get rid of ugly enum names :)

Code:
enum class kMoveDirection
{
  Up,
  Down,
  Left,
  Right
};

void move(kMoveDirection dir)
{
   if (dir == kMoveDirection::Up)
   {
   }
   ...
}
 
Dunno if you have experience with Windows Server in a modern context, but they are really not any more stable than Windows anymore. They are all pretty much the same in terms of stability at this point. I would say that Windows Server even has better built-in tools than any other platform at this point, particularly from a GUI point of view. Command line with PowerShell is about the same. I think it's a toss-up between them at this point.
Yes. No.

At the kernel level Microsoft has been left behind, and my guess is that within a decade or so they'll do what Apple did and ditch their own kernel in favor of a brand of BSD -- or whatever the unix-derived hotness is at that point.

Also, while powershell is ok, it's pretty laughable to put it on par with bash & friends. There's a reason cygwin still exists, for all its warts.
 

Haly

One day I realized that sadness is just another word for not enough coffee.
Enum classes
crazy.gif
 
Yes. No.

At the kernel level Microsoft has been left behind, and my guess is that within a decade or so they'll do what Apple did and ditch their own kernel in favor of a brand of BSD -- or whatever the unix-derived hotness is at that point.
It wouldn't be an easy transition.

OS X had its share of teething problems, mostly around problems with the Mach microkernel and large scale revisions to how the NeXT-inherited XNU handled drivers and some of the BSD derived bits. There is a good overview of XNU out there, if you're curious. The former head of Apple's kernel team starting in 2001 was one of the co-founders of FreeBSD, Jordan Hubbard, and helped to add a fair amount of FreeBSD's stable infrastructure into XNU while attempting to contribute some of it back. Meanwhile, the decision to stick with Mach for OS X wasn't without criticism behind closed doors.

Even then, I remember issues putting computers to sleep that lasted until 10.2/10.3. DVD playback wasn't even a part of OS X 10.0 until the first major update, 10.1, which shipped under Hubbard's watch.

Things like that couldn't happen unless MS was trapped in an adapt or die situation. I think Windows has enough good in it, barring the many, many layers of backwards compatibility bits on top, that the kernel isn't a lost cause.


Microsoft does have a clean subset of the Windows kernel, in MinWin.

Unfortunately they, like Apple, tend to be caught up in shipping more marketable, flashy features that might generate word of mouth or appeal to magazine critics. Until the day comes that stability's a DOS 4.0/System 7.5/Me/Vista/iOS 8 sized problem.
 
It wouldn't be an easy transition.

OS X had its share of teething problems, mostly around problems with the Mach microkernel and large scale revisions to how the NeXT-inherited XNU handled drivers and some of the BSD derived bits. There is a good overview of XNU out there, if you're curious. The former head of Apple's kernel team starting in 2001 was one of the co-founders of FreeBSD, Jordan Hubbard, and helped to add a fair amount of FreeBSD's stable infrastructure into XNU while attempting to contribute some of it back. Meanwhile, the decision to stick with Mach for OS X wasn't without criticism behind closed doors.

Even then, I remember issues putting computers to sleep that lasted until 10.2/10.3. DVD playback wasn't even a part of OS X 10.0 until the first major update, 10.1, which shipped under Hubbard's watch.

Things like that couldn't happen unless MS was trapped in an adapt or die situation. I think Windows has enough good in it, barring the many, many layers of backwards compatibility bits on top, that the kernel isn't a lost cause.


Microsoft does have a clean subset of the Windows kernel, in MinWin.

Unfortunately they, like Apple, tend to be caught up in shipping more marketable, flashy features that might generate word of mouth or appeal to magazine critics. Until the day comes that stability's a DOS 4.0/System 7.5/Me/Vista/iOS 8 sized problem.
They're talented dudes and dudettes, they could pull off a switchover. The fact is though, talented or not, one company can't keep up on the R&D front with other companies and academia pouring millions and millions of man hours into Linux and BSD.

Fact of the matter, if Apple could do it, Microsoft definitely can. The technical legacy they have in-house puts Apple to shame.
 
They're talented dudes and dudettes, they could pull off a switchover. The fact is though, talented or not, one company can't keep up on the R&D front with other companies and academia pouring millions and millions of man hours into Linux and BSD.

Fact of the matter, if Apple could do it, Microsoft definitely can. The technical legacy they have in-house puts Apple to shame.

I'm not sure I agree with this. Most of the hours being put in have nothing to do with producing a consumer OS with mass appeal. Furthermore, there are literally billions of lines of code written against the Windows API, the design of which is heavily influenced by the set of features available from the kernel, which is not the same set of features offered by other platforms. This is why software written against MinGW or run under cygwin never really works quite right, or doesn't feel like a native Windows program.

What you're describing isn't a "switchover", it's a complete rewrite of the OS, and all tools and software for the OS. It has zero chance of happening. A much more likely scenario in my opinion is that they open source the OS (and I actually believe this is not as crazy and unlikely as many people think; I think we'll see it happen).

People have been saying Windows won't be able to keep up for how many years? IMO it'll be the dominant consumer desktop OS until desktop OSes are no longer a thing (which admittedly may not be more than another 10-15 years)
 
I'm not sure I agree with this. Most of the hours being put in have nothing to do with producing a consumer OS with mass appeal. Furthermore, there are literally billions of lines of code written against the Windows API, the design of which is heavily influenced by the set of features available from the kernel, which is not the same set of features offered by other platforms. This is why software written against MinGW or run under cygwin never really works quite right, or doesn't feel like a native Windows program.

What you're describing isn't a "switchover", it's a complete rewrite of the OS, and all tools and software for the OS. It has zero chance of happening.

People have been saying Windows won't be able to keep up for how many years? IMO it'll be the dominant consumer desktop OS until desktop OSes are no longer a thing (which admittedly may not be more than another 10-15 years)

A much more likely scenario in my opinion is that they open source the OS
Well, they've been right -- Windows has been falling behind for a while. To this point Microsoft has been able to stay close enough, and to leverage Windows as the MS Office delivery platform.

re: them open sourcing it, I think that's possible, but unlikely. If they go the open source route (and I think they will), better to build their OS around a modern kernel with a business-friendly license (so, BSD) than reinvent the wheel from scratch. And what a wheel it is!

Yes, the API backwards compatibility would be rough. But they wade into that pool every time they do a major version release anyway. They know what they're doing.
 
I'd like to see parts of the Windows kernel open sourced, too. DevDiv has been friendly to open sourcing and reaching out to alternative platforms like the Mono project through the .NET Foundation.

Roslyn, the open source C#/VB compiler platform, is part of something I hope Microsoft at large gets into. The QuickVB project that builds on it is adorable.

I am liking this trend towards building on open source, liberally licensed compiler infrastructure. The Clang project flat out gives away the same stable interfaces Xcode uses for AST walking, live issues and basically anything that app wants to do with a compiler in the form of libclang (PDF slides, QuickTime video). The best examples I've seen for that come from a Japanese developer who writes books on Clang and other LLVM infrastructure with other members of his circle.

Yeah, I'm excited to see what else might come from those projects. Especially as Clang on Windows continues shaping up.
 
Well, they've been right -- Windows has been falling behind for a while. To this point Microsoft has been able to stay close enough, and to leverage Windows as the MS Office delivery platform.

re: them open sourcing it, I think that's possible, but unlikely. If they go the open source route (and I think they will), better to build their OS around a modern kernel with a business-friendly license (so, BSD) than reinvent the wheel from scratch. And what a wheel it is!

Yes, the API backwards compatibility would be rough. But they wade into that pool every time they do a major version release anyway. They know what they're doing.

Desktop platforms as a whole have been declining, but I don't think that the ratio of windows desktops to non windows desktops has significantly shifted.

As for reinventing the wheel, that's exactly what changing the kernel would do, because they would have to reinvent every single other thing except the kernel, which I would guess is about 95% of windows.

Re: backwards compat, it is very rare they introduce breaking api changes. In fact, I don't recall it happening ever. It wouldn't just be rough, it would kill almost every nontrivial piece of software out there not written in a managed language. Open sourcing their own os under a BSD like license would light a fire under the ass of the whole world, writing windows on top of BSD would just burn the whole world down. Just my 2c
 

Two Words

Member
I'm running into a strange issue with a C++ project. The project is the "Game of Life" game.

To simply explain it, there is a grid that is filled with either * or a space. A * represents a bacteria cell. A space represents an empty space. If a cell has less than 2 neighbors or more than 3 neighbors, the cell dies. If an empty spot has exactly 3 neighbors, a cell is born there. The program prints every generation.
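For reference, the rule boils down to something like this (just the logic, not my actual grid code):

Code:
#include <iostream>

// The rule above, for a single cell.
bool next_state(bool alive, int live_neighbors) {
    if (alive)
        return live_neighbors == 2 || live_neighbors == 3;  // survives
    return live_neighbors == 3;                             // born
}

int main() {
    std::cout << next_state(true, 1) << "\n";   // 0: dies, fewer than 2 neighbors
    std::cout << next_state(false, 3) << "\n";  // 1: born, exactly 3 neighbors
    return 0;
}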

I am certain that I am simulating everything correctly, but for some reason I get these thick white lines in the console. The white lines do not show up until several generations have run. I save the text into a file, and when I look at the file in hexadecimal, I am not seeing anything that would represent those thick white lines. I'm only seeing 20 (space), 0D 0A (carriage return and line feed), and 2A (the * character). Here is what I am talking about.


Dynamic arrays are used for this. A dynamic array is created that is large enough to hold the grid read from a file, then each generation is computed on that array.





If anybody wants to see the code, just let me know. I don't want to spam 450 lines here.
 
Desktop platforms as a whole have been declining, but I don't think that the ratio of windows desktops to non windows desktops has significantly shifted.

As for reinventing the wheel, that's exactly what changing the kernel would do, because they would have to reinvent every single other thing except the kernel, which I would guess is about 95% of windows.

Re: backwards compat, it is very rare they introduce breaking api changes. In fact, I don't recall it happening ever. It wouldn't just be rough, it would kill almost every nontrivial piece of software out there not written in a managed language. Open sourcing their own os under a BSD like license would light a fire under the ass of the whole world, writing windows on top of BSD would just burn the whole world down. Just my 2c
See, I don't think so. They've already been rewriting the kernel (that's where the "7" in Windows 7 was borrowed from), and presumably investing some resources in API compatibility in the process. So we'd be talking about something like winelib, except instead of a hacked-together, black-box reverse-engineering effort, a properly designed compatibility layer.

MS never breaks backwards compatibility. But they wouldn't need to -- except for finally jettisoning some of their truly ancient nonsense.

Again, they have to do this regardless, whether for a from-scratch kernel or a specialized BSD. Backwards compatibility doesn't mean "keep around old code forever". The real reinvention of the wheel would be trying to come up with better core OS semantics than the unixes, which have been hammering away at theirs at an incredible pace for the last 15 years.
 
I'm running into a strange issue with a C++ project. The project is the "Game of Life" game.

To simply explain it, there is a grid that is filled with either * or a space. A * represents a bacteria cell. A space represents an empty space. If a cell has less than 2 neighbors or more than 3 neighbors, the cell dies. If an empty spot has exactly 3 neighbors, a cell is born there. The program prints every generation.

I am certain that I am simulating everything correctly, but for some reason I get these thick white lines in the console. The white lines do not show up until several generations have run. I save the text into a file, and when I look at the file in hexadecimal, I am not seeing anything that would represent those thick white lines. I'm only seeing 20 (space), 0D 0A (carriage return and line feed), and 2A (the * character). Here is what I am talking about.



Dynamic arrays are used for this. A dynamic array is created that is large enough to hold the grid read from a file, then each generation is computed on that array.





If anybody wants to see the code, just let me know. I don't want to spam 450 lines here.

Does it always happen on the same generation if you start with the same initial configuration?
 

Two Words

Member
Does it always happen on the same generation if you start with the same initial configuration?

It appears so. However, this does not hold when running the program again. So if I do this loop:

simulate the array
console output the array
save the array

Then the last generation will always be saved into the file.

When I run it again, it will be starting from the last generation of the previous run. But it again takes 17 generations for the white lines to appear, even though they were supposedly right there at the end of the first run.
 

Two Words

Member
I think I figured it out. It looks like it might have just been because of my console settings. I changed the console width from 80 characters to 200, which is why the console is so wide. When I return the width to a number much closer to 80, the white lines no longer appear.
 
Hey. I found these notes for C++, Java and a few other topics, along with loads of examples. It's very nice. Especially the Java part, since there aren't many good free web sources to refer to or even learn Java from.

It covers everything concisely, so it may not be ideal for starters, but for revision, or to brush up on the basics in a short time, it looks ideal.

Here is the link

http://www3.NTU.edu.SG/home/ehchua/programming/index.html

I believe it's a professor's notes, which explains why it's so refined.
 

Water

Member
Hey. I found these notes for C++, Java and a few other topics, along with loads of examples. It's very nice. Especially the Java part, since there aren't many good free web sources to refer to or even learn Java from.

http://www3.NTU.edu.SG/home/ehchua/programming/index.html
Some parts of the C++ section are surprisingly good - good enough that I'm bookmarking it for inspiration for my own teaching materials, or for pointing out specific sections as a reference to my students - but the overall structure and the order things appear in is fucked in the classic way that a lot of C++ teaching is. It's a more suitable outline for a C course than for a C++ course. It's not modern C++ either, but that's the least of the shortcomings of the material.

Since we're on the topic, the Koenig & Moo "Accelerated C++" book is the best template I've seen for teaching/learning C++ to someone who already has a little bit of programming skill in another language. Unfortunately it also doesn't use modern C++, having been written in 2000. Have any superlative beginner C++ books appeared which use C++11 or better? This is of interest to me since I'll probably end up teaching a bit of C++ this Fall.
 

Mr.Mike

Member
Some parts of the C++ section are surprisingly good - good enough that I'm bookmarking it for inspiration for my own teaching materials, or for pointing out specific sections as a reference to my students - but the overall structure and the order things appear in is fucked in the classic way that a lot of C++ teaching is. It's a more suitable outline for a C course than for a C++ course. It's not modern C++ either, but that's the least of the shortcomings of the material.

Since we're on the topic, the Koenig & Moo "Accelerated C++" book is the best template I've seen for teaching/learning C++ to someone who already has a little bit of programming skill in another language. Unfortunately it also doesn't use modern C++, having been written in 2000. Have any superlative beginner C++ books appeared which use C++11 or better? This is of interest to me since I'll probably end up teaching a bit of C++ this Fall.

I've been reading through C++ Primer (5th Edition) which seems to have been cowritten by Moo as well. It teaches modern C++ and makes a big point of teaching good C++ and not C. I've found it to be pretty good so far, not that I've made it terribly far into it yet. It does seem like it's more intended for someone who already knows a bit of programming though.

Most texts present C++ in the order in which it evolved. They teach the C subset of C++ first, and present the more abstract features of C++ as advanced topics at the end of the book. There are two problems with this approach: Readers can get bogged down in the details inherent in low-level programming and give up in frustration. Those who do press on learn bad habits that they must unlearn later.

We take the opposite approach: Right from the start, we use the features that let programmers ignore the details inherent in low-level programming. For example, we introduce and use the library string and vector types along with the built-in arithmetic and array types. Programs that use these library types are easier to write, easier to understand, and much less error-prone.

Be wary of a book called C++ Primer Plus (6th edition), which isn't anywhere near as good from what I've read, and isn't the same book or even the next edition of the book.
 

Water

Member
So what's better, C++ or C?

Nobody who knows both languages will elect to use C when C++ compilers and tools are available. Most pieces of C code are also valid C++ code, but C++ offers you additional ways of doing things. Even with problems where C++ code naturally stays very close to C code, there's likely to be at least something where C++ facilities allow you to do something more neatly, do it in a safer way, or even save performance compared to sticking to C.
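A tiny made-up example of the kind of thing I mean, concatenating two strings:

Code:
#include <cstdio>
#include <cstdlib>
#include <cstring>
#include <string>

int main() {
    // C style: manual sizing and freeing, easy to get subtly wrong.
    const char *a = "Hello, ";
    const char *b = "world";
    char *buf = (char *)malloc(strlen(a) + strlen(b) + 1);
    strcpy(buf, a);
    strcat(buf, b);
    printf("%s\n", buf);
    free(buf);

    // C++ style: same result, nothing to size or free by hand.
    std::string s = std::string(a) + b;
    printf("%s\n", s.c_str());
    return 0;
}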
 
For C++ I refer to learncpp.com. Their material is awesome. I prefer to learn languages through websites like that rather than reading books.

For Java I couldn't find a neat website like learncpp.com, but now I have. :)
 

Water

Member
I've been reading through C++ Primer (5th Edition) which seems to have been cowritten by Moo as well. It teaches modern C++ and makes a big point of teaching good C++ and not C. I've found it to be pretty good so far, not that I've made it terribly far into it yet. It does seem like it's more intended for someone who already knows a bit of programming though.
The Lippman book is decent - maybe Moo cowriting has something to do with that. But despite what they say about teaching C++, they still fall halfway into the same trap as most of the others. What really distinguishes the approach used by Koenig, and what I personally stick to when teaching, is that pointers are not seen - not introduced at all - before the student has the skills to do things that actually require using pointers and dynamic memory allocation. It results in a fantastic teaching progression where even templates actually come before pointers. Arrays aren't introduced early either, std::vector is used instead.

A C++ course that has run at my university for the last couple of years (now sadly cancelled) insisted on student projects containing zero naked pointers / new / delete unless there was a very good justification for them to be there. Its prerequisite was a pure C programming course, so there was a big emphasis on not carrying over any bad habits from C.
 
The Lippman book is decent - maybe Moo cowriting has something to do with that. But despite what they say about teaching C++, they still fall halfway into the same trap as most of the others. What really distinguishes the approach used by Koenig, and what I personally stick to when teaching, is that pointers are not seen - not introduced at all - before the student has the skills to do things that actually require using pointers and dynamic memory allocation. It results in a fantastic teaching progression where even templates actually come before pointers. Arrays aren't introduced early either, std::vector is used instead.

That's a really nice way to teach. Teaching pointers initially would be overwhelming, and it's inefficient too because, in the initial stages, the stuff you can do with pointers and dynamic allocation can be done normally. Pointers only make it more complicated.
 
The Lippman book is decent - maybe Moo cowriting has something to do with that. But despite what they say about teaching C++, they still fall halfway into the same trap as most of the others. What really distinguishes the approach used by Koenig, and what I personally stick to when teaching, is that pointers are not seen - not introduced at all - before the student has the skills to do things that actually require using pointers and dynamic memory allocation. It results in a fantastic teaching progression where even templates actually come before pointers. Arrays aren't introduced early either, std::vector is used instead.

A C++ course that has run at my university for the last couple of years (now sadly cancelled) insisted on student projects containing zero naked pointers / new / delete unless there was a very good justification for them to be there. Its prerequisite was a pure C programming course, so there was a big emphasis on not carrying over any bad habits from C.

As a thought experiment, I wonder what would happen if you took the reverse approach. Only teach pointers right from day 1. You want to read two numbers and add them? Well, make 2 int*'s, new them both, read their values, print the result, then delete them. You wouldn't have to explain what the news and deletes were for, similar to how you don't really explain why you have to type int main(int argc, char **argv) to someone starting out. Just say that's how you make variables. Then, 3 months down the line, say "oh, by the way. Here's a neat little trick. If you don't use the *, you can skip all that new / delete stuff".
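That first program would look something like this sketch:

Code:
#include <iostream>

int main() {
    int *a = new int;
    int *b = new int;
    std::cin >> *a >> *b;
    std::cout << (*a + *b) << "\n";
    delete a;
    delete b;
    return 0;
}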

Reminds me of the way I was taught calculus. You learn derivatives the "hard way" with the limit definition, then they teach "oh yeah, you can actually just multiply by the exponent and subtract 1 from it".
 