
Programming |OT| C is better than C++! No, C++ is better than C

Nesotenso

Member
Learning Make is definitely the hard way. Honestly, I'm going to do you a favor and just not answer your question. Learn a useful cross-platform build system like cmake or gyp.

I have used cmake in the past to build some OpenCV libraries and include them in an IDE like Code::Blocks. I am trying to get better at C/C++, Python, and Perl, and the online tutorial "Learn C the Hard Way" was the one I went with. I have some C experience, so I thought, why not follow this one? The dude who wrote it says at the beginning to avoid IDEs.


How would I use cmake to generate Makefiles on a Linux platform?
 
Make really isn't difficult. The Makefile is just a list of targets and dependencies. For GNU Make, here is the documentation:

http://www.gnu.org/software/make/manual/make.html

If your Makefile gets complicated, you're probably doing it wrong.

People may get the wrong idea about make because autoconf/automake makes a horrible mess in the Makefile for multi-platform builds. Building for a known platform or a known subset is much easier, though, and in any case a good working knowledge of make is worth acquiring. It has sensible defaults and it's very easy to learn.
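For instance, here's a minimal sketch for a made-up two-file C project (the file names are invented; note that the command lines under each target must be indented with a tab):

CC = gcc
CFLAGS = -Wall

# target: the files it depends on
app: main.o util.o
	$(CC) $(CFLAGS) -o app main.o util.o

main.o: main.c util.h
	$(CC) $(CFLAGS) -c main.c

util.o: util.c util.h
	$(CC) $(CFLAGS) -c util.c

clean:
	rm -f app *.o

Run "make" and it rebuilds only what changed. Make's built-in defaults could shorten this even further, but spelled out it's clearer.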
 

Nesotenso

Member
Make really isn't difficult. The Makefile is just a list of targets and dependencies. For GNU Make, here is the documentation:

http://www.gnu.org/software/make/manual/make.html

If your Makefile gets complicated, you're probably doing it wrong.

People may get the wrong idea about make because autoconf/automake makes a horrible mess in the Makefile for multi-platform builds. Building for a known platform or a known subset is much easier, though, and in any case a good working knowledge of make is worth acquiring. It has sensible defaults and it's very easy to learn.

Thanks for the link!
 
Make really isn't difficult. The Makefile is just a list of targets and dependencies. For GNU Make, here is the documentation:

http://www.gnu.org/software/make/manual/make.html

If your Makefile gets complicated, you're probably doing it wrong.

People may get the wrong idea about make because autoconf/automake makes a horrible mess in the Makefile for multi-platform builds. Building for a known platform or a known subset is much easier, though, and in any case a good working knowledge of make is worth acquiring. It has sensible defaults and it's very easy to learn.

It also borders on unusable for large enough products that are cross-platform (especially if Windows is one of the platforms).
 
I've been thinking about pursuing programming as a new career. I currently have a BA in a completely unrelated field (history). I've been looking at my local community college's offerings, and two caught my eye: an AS in computer programming and a certificate for "Smartphone App Developer." The certificate is fewer courses, but I think a lot of my classes would transfer anyway, eliminating a lot of the core requirements.

For the AS, the courses I'd have to take:
CSC 108 Introduction to Programming
Major Requirements (18-20 Credits)
CSC 233 Database Development I
CSC 234 Database Development II
CST 255 XML for the WWW
2 Semester Programming Sequence

Major Electives (9-12 Credits)
Any CSC or CST or MAT 200+

For the certificate:
CSC 108 Introduction to Programming
CSC 226 Object Oriented Programming Elective
CSC 262 Programming Mobile Devices I
CSC 263 Programming Mobile Devices II

Do you think one of these would open up more opportunities than the other? Or am I better off going out on my own and learning independently? Money is a concern, but it's community college and not outrageously expensive.
 
It also borders on unusable for large enough products that are cross-platform (especially if Windows is one of the platforms).

Yes, I'd go along with that. Many open source developers struggle along with the heinous mess of autoconf and automake because they really _want_ a truly portable end product. But that's a dark art and really not needed for many less ambitious projects.

I do recall that one large project, Chicken Scheme, had a release manager who used cmake. When he left the project, the project leader, who didn't understand cmake, just went back to plain make. Things were suddenly much easier to build.
 
Yes, I'd go along with that. Many open source developers struggle along with the heinous mess of autoconf and automake because they really _want_ a truly portable end product. But that's a dark art and really not needed for many less ambitious projects.

I do recall that one large project, Chicken Scheme, had a release manager who used cmake. When he left the project, the project leader, who didn't understand cmake, just went back to plain make. Things were suddenly much easier to build.

True, it's not needed, but at the same time nobody wants to learn a build system. Shit sucks. So if you're only going to learn one, make it one that hides as many platform-specific details as possible. Make/autoconf is unusable even for *single*-platform development on Windows. So if you ever plan to develop on Windows, you're wasting your time with it.

Since build systems are so shitty to learn, personally I would just use an IDE until I was forced to modify someone else's build system, and then learn that one.
 
Make/autoconf is unusable even for *single*-platform development on Windows. So if you ever plan to develop on Windows, you're wasting your time with it.

Conceivably everything you say here is true. My comments probably lose validity if the developer cannot assume a Posix environment will be present at build time.

Autoconf and everything are hideous, but just plain old make is the bees' knees within Posix.
 

cyborg009

Banned
Has anyone worked with Android's Material Design? I was thinking about looking into it, but it feels like something that would be pointless in its current state.
 
Conceivably everything you say here is true. My comments probably lose validity if the developer cannot assume a Posix environment will be present at build time.

Autoconf and everything are hideous, but just plain old make is the bees' knees within Posix.

Yea, I'm primarily a Windows programmer, and right now I'm in the process of porting a fairly massive project to Windows from an environment where Posix was assumed. Make is one of the things I'm having to deal with, and it's a huge pain in the ass.

There are things like Cygwin and MinGW so you can pretend you're on Posix when you're not, but those are the tools of the devil in my opinion. I know achieving platform independence is hard and all, but part of the reason Windows gets a bad rep from non-Windows programmers is that they (understandably) don't have the time or knowledge to do a native Windows port, so they stick all these Posix layers in between: at the API level (MinGW, and/or relying on the CRT for portability instead of using the native Windows API), at the OS environment level (Cygwin), and at the build system level (Make, etc.). Then everything ends up performing/behaving like shit, which perpetuates the already prevalent dislike for all things Windows.

Anyway, whenever this topic comes up I have to rant, so forgive me :)
 
Anyway, whenever this topic comes up I have to rant, so forgive me :)

I understand and share your feelings here. I've worked on pure Windows projects and found the Microsoft tools great to use. Cross-platform is nasty because there's the Microsoft way and there's everybody else's way. MinGW is not for the faint-hearted!
 
They're section names. What a section name means depends on the executable file format and the OS loader. Since you're doing Dreamcast assembly, you would need to check the Dreamcast spec to know what the nonstandard sections like .little are used for.

Sorry for the delayed response, but thank you for this reply. I was able to find some MSDN and GNU documentation pertaining to the SH-4 directives.
 
I understand and share your feelings here. I've worked on pure Windows projects and found the Microsoft tools great to use. Cross-platform is nasty because there's the Microsoft way and there's everybody else's way. MinGW is not for the faint-hearted!

True for the most part, but at least the build system thing is a solved problem with something like cmake or gyp. And shell scripts are made portable by not using them at all and using Python instead. All that's left is the toolchain and the API. Clang is like 95% complete on Windows, so that just leaves the API :). And although clearly different from Posix, the Windows API is honestly very nice.
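To make that concrete (and to circle back to the earlier question about generating Makefiles on Linux), a minimal CMakeLists.txt is about three lines; the project and file names here are made up:

cmake_minimum_required(VERSION 2.8)
project(hello)
add_executable(hello main.c)

# From an empty build directory:
#   cmake /path/to/source    (generates Makefiles on Linux by default)
#   make

The same three lines generate Visual Studio projects on Windows, which is the whole point.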
 
Recruiter email said:
Hope you are doing well.

We have an Urgent Job Opportunity “Cassandra platform engineer” In Minneapolis, MN. If you are interested with this Permanent/full time opportunity, then please send me your updated word format resume, earliest availability ASAP. Please feel free to contact me or reply me if need more information about this opportunity.
Cassandra platform engineer

Location:- Minneapolis, MN

Relevant Experience Required:- 5 years experience in Cassandra

Must Have Skills:-

· 5 years experience in Cassandra
· Apache tomcat
· Linux Shell
Roles & Responsibilities:-

· Cassandra Database administration

· Support the cassandra db and server it is installed

· Triage and fix tomcat related issues in the Cassandra platform

Cassandra wasn't even a top-level Apache project until Feb 2010, which is less than five years ago. It didn't even get a 1.0 release until October 2011. Unless you were one of the people who developed the platform (or worked with Facebook's version), it's almost literally impossible to have 5 years of experience in Cassandra. I also don't know how you "Triage and fix tomcat related issues in the Cassandra platform" because Tomcat and Cassandra are two completely different things.
 
Cassandra wasn't even a top-level Apache project until Feb 2010, which is less than five years ago. It didn't even get a 1.0 release until October 2011. Unless you were one of the people who developed the platform (or worked with Facebook's version), it's almost literally impossible to have 5 years of experience in Cassandra. I also don't know how you "Triage and fix tomcat related issues in the Cassandra platform" because Tomcat and Cassandra are two completely different things.

Back around 2002-2004, I remember recruiters telling me they were looking for people with 5-7 years of C# experience.
 
Back around 2002-2004, I remember recruiters telling me they were looking for people with 5-7 years of C# experience.

Ha, yeah, I was just hitting college around then, and I don't even think they offered a course on it yet. I first remember hearing about it as more than a passing conversation in 2005, about two years before I graduated.
 

Onemic

Member
Back around 2002-2004, I remember recruiters telling me they were looking for people with 5-7 years of C# experience.

So what do you do when job applications have such steep requirements? It seems like 90% of them are constructed the way that recruiter's was.
 
Cassandra wasn't even a top-level Apache project until Feb 2010, which is less than five years ago. It didn't even get a 1.0 release until October 2011. Unless you were one of the people who developed the platform (or worked with Facebook's version), it's almost literally impossible to have 5 years of experience in Cassandra. I also don't know how you "Triage and fix tomcat related issues in the Cassandra platform" because Tomcat and Cassandra are two completely different things.

"Looking for programmers with 5+ years of Apple Swift Experience!"

So what do you do when job applications have such steep requirements? It seems like 90% of them are constructed the way that recruiter's was.

There are two ways to look at it. Either the person writing the ad is a moron who has no idea what anything means, and the company is likely run by morons.

Or someone is trying to weed out people with little to no experience of any kind.
 
So what do you do when job applications have such steep requirements? It seems like 90% of them are constructed the way that recruiter's was.

Job reqs are written such that no reasonable person could ever meet them. Always, always assume that if you are comfortable with at least a few of the skills they "require", you will have a pretty good shot.

I don't know why they do that, but in my 15+ years of experience, I've learned to never take listed job reqs seriously. If the job sounded interesting and it was something I felt I could learn, I would apply. The only exception to this is when the reqs add additional wording to emphasize that you will not be considered unless you know X, Y, and Z. They're serious about those.

Example: if you have even a little bit of experience with image processing and H.264, and you know C++, you should feel comfortable applying for this job. Ignore everything else.
 
Another assembly question. When you have a subroutine returning to the main routine, how do you determine which registers the information from the subroutine is in?

AKA, in C I can return a piece of data with a specific type. In assembly, that data could be in any register when you return. Where is it? Or at least, where should I be looking in the CPU documentation to find out?
 

tokkun

Member
Another assembly question. When you have a subroutine returning to the main routine, how do you determine which registers the information from the subroutine is in?

AKA, in C I can return a piece of data with a specific type. In assembly, that data could be in any register when you return. Where is it? Or at least, where should I be looking in the CPU documentation to find out?

It's usually up to the author of the subroutine rather than being standardized.
 
Another assembly question. When you have a subroutine returning to the main routine, how do you determine which registers the information from the subroutine is in?

AKA, in C I can return a piece of data with a specific type. In assembly, that data could be in any register when you return. Where is it? Or at least, where should I be looking in the CPU documentation to find out?

The only reason you know that in C is because C compilers implement what's known as "calling conventions" which impose requirements on how they generate code. If a particular calling convention says that the return value will be in EAX, it's only because the compiler made sure to put it there before inserting a RET instruction.

In straight assembly, it's up to you to use whatever calling convention you choose, including no calling convention at all, in which case the information from the subroutine will be in whatever register it happens to be in.

If you're asking about a subroutine that you wrote yourself, then you control the subroutine so therefore you control where it writes its output values and return values to. If it's a subroutine someone else wrote, you need to check its documentation to see what calling convention it uses, or where it puts its outputs.

The two most common calling conventions are stdcall and cdecl. You can google them to see what rules they impose on a subroutine. If you decide to go with one of those, you will need to implement your functions and your call sites accordingly.
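To illustrate, here's a rough x86 sketch in NASM-style syntax (the routine and the values are made up, and it only loosely follows cdecl):

; int add_two(int a, int b)
; cdecl-style: arguments pushed right to left, result in EAX,
; and the caller cleans up the stack afterwards.
add_two:
    mov eax, [esp+4]    ; first argument ([esp] holds the return address)
    add eax, [esp+8]    ; second argument
    ret                 ; the caller finds the result in EAX by agreement

caller:
    push 2              ; arguments go on in reverse order
    push 1
    call add_two
    add esp, 8          ; cdecl: the caller pops the arguments
    ; EAX now holds 3, but only because both sides followed the convention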
 
The only reason you know that in C is because C compilers implement what's known as "calling conventions" which impose requirements on how they generate code. If a particular calling convention says that the return value will be in EAX, it's only because the compiler made sure to put it there before inserting a RET instruction.

In straight assembly, it's up to you to use whatever calling convention you choose, including no calling convention at all, in which case the information from the subroutine will be in whatever register it happens to be in.

If you're asking about a subroutine that you wrote yourself, then you control the subroutine so therefore you control where it writes its output values and return values to. If it's a subroutine someone else wrote, you need to check its documentation to see what calling convention it uses, or where it puts its outputs.

The two most common calling conventions are stdcall and cdecl. You can google them to see what rules they impose on a subroutine. If you decide to go with one of those, you will need to implement your functions and your call sites accordingly.

Thank you once again. You've been very helpful!
 

Marcus

Member
Anyone here familiar with MTD partitions? I currently need to revise a very small snippet of old code, and I am a little confused about how to go about it. This question is a bit hardware-related, but essentially the product that we were using has 1 gigabit of flash memory, and it will be upgraded to have 4 gigabits. The current code is this:

static struct mtd_partition partition_info[] = {
    { .name   = "SBL",
      .offset = 0,
      .size   = SZ_1M },
    { .name   = "u-boot",
      .offset = SZ_1M,
      .size   = SZ_1M },
    { .name   = "Kernel",
      .offset = 2 * SZ_1M,
      .size   = 32 * SZ_1M },
    { .name   = "JFFS2",
      .offset = 34 * SZ_1M,
      .size   = 94 * SZ_1M },
};

So we have SBL (secondary boot loader) at 1 MB, u-boot at 1 MB, the kernel at 32 MB, and JFFS2 (journaling flash file system) at 94 MB. This totals 128 megabytes, which is ~= 1 gigabit.

My question is: how exactly would I modify this code snippet for 4 gigabits? I'm guessing that the kernel or JFFS2 partition must be increased.
 
Anyone here familiar with MTD partitions? I currently need to revise a very small snippet of old code, and I am a little confused about how to go about it. This question is a bit hardware-related, but essentially the product that we were using has 1 gigabit of flash memory, and it will be upgraded to have 4 gigabits. The current code is this:



So we have SBL (secondary boot loader) at 1 MB, u-boot at 1 MB, the kernel at 32 MB, and JFFS2 (journaling flash file system) at 94 MB. This totals 128 megabytes, which is ~= 1 gigabit.

My question is: how exactly would I modify this code snippet for 4 gigabits? I'm guessing that the kernel or JFFS2 partition must be increased.

Just a guess, but it's probably up to you. It already runs with the current values, so as long as you don't decrease them you're probably fine. Out of an abundance of caution, I'd try to keep kernel memory a power of 2, so maybe try 64 MB for the kernel and dump everything else into JFFS2. Make sure you fix up the offsets to be correct.

Again though, this is just a guess.
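For illustration only (it's just arithmetic, not a recommendation): 4 gigabits is 512 MB, and 1 + 1 + 64 leaves 446 MB for JFFS2, so that suggestion would look something like:

static struct mtd_partition partition_info[] = {
    { .name   = "SBL",
      .offset = 0,
      .size   = SZ_1M },
    { .name   = "u-boot",
      .offset = SZ_1M,
      .size   = SZ_1M },
    { .name   = "Kernel",
      .offset = 2 * SZ_1M,
      .size   = 64 * SZ_1M },   /* bumped to the next power of 2 */
    { .name   = "JFFS2",
      .offset = 66 * SZ_1M,
      .size   = 446 * SZ_1M },  /* everything that's left */
};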
 

Marcus

Member
Just a guess, but it's probably up to you. It already runs with the current values, so as long as you don't decrease them you're probably fine. Out of an abundance of caution, I'd try to keep kernel memory a power of 2, so maybe try 64 MB for the kernel and dump everything else into JFFS2. Make sure you fix up the offsets to be correct.

Again though, this is just a guess.

Hmm, I was thinking about just increasing the JFFS2 partition and keeping everything else the same.
 
Another assembly question. When you have a subroutine returning to the main routine, how do you determine which registers the information from the subroutine is in?

AKA, in C I can return a piece of data with a specific type. In assembly, that data could be in any register when you return. Where is it? Or at least, where should I be looking in the CPU documentation to find out?

In principle: who says the assembler routine will even return? In actuality: you must read the author's documentation (and, seriously, you must trust the author).
 

Granadier

Is currently on Stage 1: Denial regarding the service game future
Does anyone here have experience with October CMS?

I'm trying to start working with it, but I've been running into walls with getting it set up. If you could direct me to a good help doc or tutorial that'd be great.
 

Mr.Mike

Member
So my province just launched an online gambling site. I found this in the FAQ.
The Alcohol and Gaming Commission of Ontario (AGCO) Technical and Laboratory Services Branch carries out extensive testing using specialized computer technology, electronics, probability and statistics to certify that games are completely random.

Is that all BS, or is there some truth to it?
 
Does anyone use Visual Studio? The program is very laggy on my laptop; the text doesn't appear instantly, and it's generally very slow and unresponsive.
 
Does anyone use Visual Studio? The program is very laggy on my laptop; the text doesn't appear instantly, and it's generally very slow and unresponsive.
I use it on my desktop computer and used to use it on a 2011 MacBook Pro running under Boot Camp. How bad is your laptop?

Edit: is it doing anything else, like parsing your files for Intellisense? That can slow it down on a less beefy computer until it's finished.
 

Zoe

Member
So my province just launched an online gambling site. I found this in the FAQ.


Is that all BS, or is there some truth to it?

All electronic gambling companies must get their games certified. Randomness is one of the criteria.
 
I suppose so. But I was under the impression that true randomness is very difficult, if not impossible.

Only randomness implemented in software. That's not to say all random number generators are equal, though; some are better than others. The best software random number generators are called "cryptographically secure". Most likely that is the bar one of these machines must pass. And honestly, that's not really that hard. Most programming languages' standard libraries come with cryptographically secure RNGs nowadays.
 
Most programming languages' standard libraries come with cryptographically secure RNGs nowadays.

Probably not. It's well known that standard random functions in language specifications are normally designed for repeatability, so that the same sequence will always be generated for any given seed. That specification makes cryptographic security impossible. Most well-written language documentation will note that the standard random functions are completely unsuitable for cryptography. They're great for teaching and, to a certain extent, for generating test data.
 
Probably not. It's well known that standard random functions in language specifications are normally designed for repeatability, so that the same sequence will always be generated for any given seed. That specification makes cryptographic security impossible. Most well-written language documentation will note that the standard random functions are completely unsuitable for cryptography. They're great for teaching and, to a certain extent, for generating test data.

Admittedly I don't know much about languages other than C++, but C++ has std::random_device.

std::random_device is a uniformly-distributed integer random number generator that produces non-deterministic random numbers. std::random_device may be implemented in terms of an implementation-defined pseudo-random number engine if a non-deterministic source (e.g. a hardware device) is not available to the implementation.

I was sure it was cryptographically secure though, so I checked the MSDN page for a second opinion.

The class describes a source of random numbers, and is allowed but not required to be non-deterministic or cryptographically secure by the ISO C++ Standard. In the Visual Studio implementation the values produced are non-deterministic and cryptographically secure, but runs more slowly than generators created from engines and engine adaptors (such as mersenne_twister_engine, the high quality and fast engine of choice for most applications).

So that's where my confusion came from. It is cryptographically secure on Windows, but not required to be by the standard. I just assumed other platforms and programming languages had caught up to the times by now.
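For what it's worth, the typical usage pattern looks like this (a minimal sketch; as the quotes above say, how strong random_device actually is depends on the implementation):

#include <iostream>
#include <random>

int main() {
    std::random_device rd;                        // non-deterministic source (implementation permitting)
    std::mt19937 gen(rd());                       // seed a fast engine from it
    std::uniform_int_distribution<int> die(1, 6);
    std::cout << die(gen) << '\n';                // one roll of a six-sided die
    return 0;
}

For actual crypto you'd draw from random_device directly, since mt19937 itself is not cryptographically secure no matter how it's seeded.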
 

Kalnos

Banned
Best way to program iOS apps / run Xcode on Windows?

Honestly, the best way is to use an OS X machine.

As far as Windows goes... depending on what kind of software you want to make, you can check out Xamarin, PhoneGap, etc. in order to deploy to iOS. Unity for games.
 
So that's where my confusion came from. It is cryptographically secure on Windows, but not required to be by the standard. I just assumed other platforms and programming languages had caught up to the times by now.

It's actually a fairly current topic. Much crypto is written in C, and rand is still specified to return deterministic results. The implementation of rand(3) in OpenBSD was recently switched to produce non-deterministic (and cryptographically strong) number sequences by default, thus breaking strict conformance with ANSI C. In OpenBSD's implementation, another function can be called to make subsequent calls to rand(3) produce deterministic results.

This standard-breaking change was made because a large proportion of all available packaged software was found to rely on rand, even software that would be expected to use strong crypto. The OpenBSD project manager is currently alerting the upstream maintainers of this security issue.
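If I'm remembering the name right, the opt-out function is srand_deterministic(), so the contrast looks something like this (an OpenBSD-specific sketch):

#include <stdio.h>
#include <stdlib.h>

int main(void) {
    srand(42);                   /* seed is ignored: rand() now returns strong randomness */
    printf("%d\n", rand());      /* differs on every run */

    srand_deterministic(42);     /* opt back in to the ANSI C behaviour */
    printf("%d\n", rand());      /* same sequence on every run */
    return 0;
}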
 

Kyuur

Member
Anyone here have much experience with Intellisense in VS2013 (C++)?

I'm bringing some inherited private methods into public with 'using namespace::baseclass::method', but they don't show up with the rest of the class's public methods when typing. Really annoying.

Edit: It's actually less of a problem with methods as I can just do something like this:

void DerivedMethodWithSameName() { Base::BaseMethod(); }

But I don't think there is another way to bring private inherited variables into public scope without creating getter/setter functions or something.
 
Anyone here have much experience with Intellisense in VS2013 (C++)?

I'm bringing some inherited private methods into public with 'using namespace::baseclass::method', but they don't show up with the rest of the class's public methods when typing. Really annoying.

Edit: It's actually less of a problem with methods as I can just do something like this:

void DerivedMethodWithSameName() { Base::BaseMethod(); }

But I don't think there is another way to bring private inherited variables into public scope without creating getter/setter functions or something.

There are lots of little quirks with Intellisense when you go digging around in the armpits of C++ like this. Nested classes are another big weak point.
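For reference, the using-declaration trick itself is simple, and it does work for data members too; the caveat is that the base members have to be at least protected (a minimal sketch with made-up names, which Intellisense may or may not surface properly):

#include <iostream>

class Base {
protected:                  // a using-declaration can only re-expose members
    int value = 42;         // the derived class can access, so protected
    void hello() { std::cout << "hi\n"; }   // works, but private wouldn't
};

class Derived : public Base {
public:
    using Base::hello;      // now publicly callable
    using Base::value;      // works for variables as well
};

int main() {
    Derived d;
    d.hello();
    std::cout << d.value << '\n';
    return 0;
}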
 