
Programming |OT| C is better than C++! No, C++ is better than C

A Human Becoming

More than a Member
Gah. I'm trying to stay positive, but I'm struggling so badly with a Coursera mini-project. :( Starting to think my brain just isn't built to code.
 

squidyj

Member
Is it recommended to use functions named "push" and "pop"? That's how our professor and the material referred to them, but I figured that was just used for instructional purposes.

*Edit*

Also, I've already changed the program and it works as intended. Guess doing it as a stack is a lot easier lol

yup =D

using the right tool for the job is quite important.
 

Milchjon

Member
Can anyone help me grasp what the input size (n) in complexity theory actually is? I guess when traversing a list, for example, it would be the number of its elements, but surely it's not always that simple?
I feel this should be really easy to understand, but somehow I'm getting hung up on it.
 

squidyj

Member
Can anyone help me grasp what the input size (n) in complexity theory actually is? I guess when traversing a list, for example, it would be the number of its elements, but surely it's not always that simple?
I feel this should be really easy to understand, but somehow I'm getting hung up on it.

No, no it is not. It depends on the algorithm. On a graph G composed of a set of vertices V and edges E, your algorithm might be linear in |V|, or in |E|, or in |V| + |E|, or all sorts of things. Your input might only be a single integer, but your algorithm might depend on how large that integer is.

Your complexity can also be a function of the output as well.

It really is going to depend on the algorithm, but looking at an algorithm, I haven't generally found it difficult to tell.
 

Rapstah

Member
N is usually the input size though, regardless of what that means for the time complexity of the function. If I'm reading the question right.
 

V_Arnold

Member
I have a question for HTML5-GAF (if it exists :)

I hear that requestAnimationFrame is the new king in town.
Well, I just can't wrap my head around it, honestly.

Currently, the game I'm working on uses two loops. One is the update loop, done at set intervals (1000/60 ms, basically); the other is the draw loop, also done at set intervals (1000/30 ms).

The problem is this: I don't really understand how requestAnimationFrame decides when to render stuff. And if it aims for 60fps, how can I force 30fps instead? The game uses manually drawn sprites, so there really is no gain from smoother animations, as it would only make the animation finish faster. Do I request an empty frame after every normal render frame to get 30fps, or what?

Also, I'm currently working with two canvases (one for the static backgrounds, the other for the UI and the units), but I actually plan on using three. The back layer will be the background, the middle layer will hold the animated units (30fps refresh), and the front layer will be the UI, which includes pointers, cursors, and menu switches, which can easily be done at 60 or even 120fps, so that would further mess with rAFs. I'm also thinking of introducing a fourth layer specifically for spell effects, which would mix traditional pixel art with canvas-powered effects. Hm.
 

roddur

Member
I'm using Crystal Reports with VB.NET. Is it possible to return the current group field name/value from the report viewer to the application?
 
Anybody here use Bitbucket? If so, I have a particular question. At registration they ask for a username along with an e-mail address and password. Is the username shown to everybody (public), or is the real name? Like on NeoGAF: when you register with your username, it's a public handle that everybody recognises you by. The screenshots on their website show names instead of usernames. I'm not clear on how it works.
 

Zoe

Member
I'm currently trying to clean up my code and make things more concise, so question:

Passing a SqlDataReader into a constructor from within a Read loop: good idea or bad idea?
 
I have a question for HTML5-GAF (if it exists :)

I hear that requestAnimationFrame is the new king in town.
Well, I just can't wrap my head around it, honestly.

Currently, the game I'm working on uses two loops. One is the update loop, done at set intervals (1000/60 ms, basically); the other is the draw loop, also done at set intervals (1000/30 ms).

The problem is this: I don't really understand how requestAnimationFrame decides when to render stuff. And if it aims for 60fps, how can I force 30fps instead? The game uses manually drawn sprites, so there really is no gain from smoother animations, as it would only make the animation finish faster. Do I request an empty frame after every normal render frame to get 30fps, or what?

Also, I'm currently working with two canvases (one for the static backgrounds, the other for the UI and the units), but I actually plan on using three. The back layer will be the background, the middle layer will hold the animated units (30fps refresh), and the front layer will be the UI, which includes pointers, cursors, and menu switches, which can easily be done at 60 or even 120fps, so that would further mess with rAFs. I'm also thinking of introducing a fourth layer specifically for spell effects, which would mix traditional pixel art with canvas-powered effects. Hm.
I'm using it for the first time too. From what I understand, requestAnimationFrame asks the browser to call your function just before its next repaint. It aims to give you as close to 60fps as possible, and fires less often when the tab is in the background, to save on power. So the following gives you 60fps, as long as you keep the updating under 1/60th of a second:
Code:
function mainloop() {
	//...updating, drawing...
	requestAnimationFrame(mainloop);
}
To get drawing at lower framerates, you can use it in conjunction with setTimeout, like this example:
Code:
var fps = 30;
function mainloop() {
	//...updating, drawing...
	setTimeout(function() {
		requestAnimationFrame(mainloop);
	}, 1000 / fps);
}
thereby getting the timing control of setTimeout with the efficiency/power savings of requestAnimationFrame. Doing things at different rates would probably require some manual work, for example:
Code:
var updateFPS = 60,
    doDraw = true;

function mainloop() {
	//...updating...
	if (doDraw){
		//...drawing...
	}
	setTimeout(function() {
		doDraw = !doDraw;
		requestAnimationFrame(mainloop);
	}, 1000 / updateFPS);
}
This works by drawing on every other iteration of a 60fps loop, giving 30fps drawing.

Of course, doing different layers at different rates would require more tracking, but I think you already have an idea of how to do that, so yeah :)
Anybody here use Bitbucket? If so, I have a particular question. At registration they ask for a username along with an e-mail address and password. Is the username shown to everybody (public), or is the real name? Like on NeoGAF: when you register with your username, it's a public handle that everybody recognises you by. The screenshots on their website show names instead of usernames. I'm not clear on how it works.
I believe they use your name if you filled in one, while repos will show your username.
 

squidyj

Member
That'd be amazing, but I don't think it is. Any particular thing that makes you think it is?

I have an algorithm that seems to solve the k-clique decision problem in polynomial time (I am 100% sure it runs in polynomial time), with a fairly convincing outline for a proof of correctness and about 50k trials in an implementation I wrote this morning, where it passed verification... I'm working on graphs with 500 vertices right now.

Is there anything I don't know about the problem? Some edge case I'm missing? It can't really be this easy.
 

Kalnos

Banned
I'm currently trying to clean up my code and make things more concise, so question:

Passing a SqlDataReader into a constructor from within a Read loop: good idea or bad idea?

Why do you need to do that? If you need every bit of data that's returned by the query then I would pass a DataSet... otherwise I would think you could read the values you need from the data and pass those to the constructor.
 
I have an algorithm that seems to solve k clique decision problem in polynomial time (I am 100% sure it runs in polynomial time) with a fairly convincing outline for a proof of correctness and about 50k trials in an implementation I wrote this morning where it passed verification.... I'm working on graphs with 500 vertices right now.

Is there anything I don't know about the problem? some edge case I'm missing, it can't really be this easy.

Usually people think it's polynomial but don't realize they're pushing the complexity somewhere else. For example, something that in code looks like one operation, but whose underlying implementation depends on the length of the input.
 

Zoe

Member
Why do you need to do that? If you need every bit of data that's returned by the query then I would pass a DataSet... otherwise I would think you could read the values you need from the data and pass those to the constructor.

Well like I said, I'm just trying to clean things up, make the code neater. Rather than having 10 lines of GetBlah that might have to be repeated a few times in different methods across different classes, I would just have all of that in the constructor.

I see some odd mentions here and there of SqlDataReader acting "weird" when you pass it around, but there are no real explanations.
 

Kalnos

Banned
Well like I said, I'm just trying to clean things up, make the code neater. Rather than having 10 lines of GetBlah that might have to be repeated a few times in different methods across different classes, I would just have all of that in the constructor.

I see some odd mentions here and there of SqlDataReader acting "weird" when you pass it around, but there are no real explanations.

Hm, I have never seen anything weird when passing a DataReader but who knows. All I can say is remember to close the reader when you're done with it.
 

squidyj

Member
Usually people think it's polynomial but don't realize they're pushing the complexity somewhere else. For example, something that in code looks like one operation, but whose underlying implementation depends on the length of the input.

I don't think so. There's a loop that will run at most (number of vertices + number of edges) times; the number of edges is at most on the order of vertices squared in a simple graph, and it's easy to generate a simple subgraph from a pseudo- or multigraph. So that's squared on the outer loop.

Then it checks each vertex and each edge, again squared, so we're up to the 4th power now.

For each check it needs to know some stuff... about some stuff >_> in a step that is at worst squared.

For each execution of the inner loop it may or may not do a thing that is linear in cost, but this is dwarfed by the squared check.

That takes us up to 6th order; after that is a constant-time op to decide, and the algorithm is done.
 

V_Arnold

Member
ColtraineGF:

Thanks, it seems like I will have to use a combination of the two. Which is not a problem, as my interface layer already works quite independently (i.e. only the scenes tap into it, checking if something has been pushed, initiated, touched, whatever, and reacting accordingly).
 

tokkun

Member
I don't think so. There's a loop that will run at most (number of vertices + number of edges) times; the number of edges is at most on the order of vertices squared in a simple graph, and it's easy to generate a simple subgraph from a pseudo- or multigraph. So that's squared on the outer loop.

Then it checks each vertex and each edge, again squared, so we're up to the 4th power now.

For each check it needs to know some stuff... about some stuff >_> in a step that is at worst squared.

For each execution of the inner loop it may or may not do a thing that is linear in cost, but this is dwarfed by the squared check.

That takes us up to 6th order; after that is a constant-time op to decide, and the algorithm is done.

Two points:

First off, the fact that your estimate of the program's complexity doesn't include the value of k at all is a big red flag.

Second, k-Clique is not NP-hard for fixed values of k. The 'easy' solution is O(n^k), and the best known solution is approximately O(n^(0.8k)). The issue with the complexity of k-Clique is that the upper bound on k is n. Therefore, finding a clique of size n/2, for example, has a complexity of ~n^(n/2), which is not polynomial.

If you really want to test this out, pick a graph with a lot of nodes, that is highly connected, and set the value of k to a value like n/2.
 

squidyj

Member
Two points:

First off, the fact that your estimate of the program's complexity doesn't include the value of k at all is a big red flag.

Second, k-Clique is not NP-hard for fixed values of k. The 'easy' solution is O(n^k), and the best known solution is approximately O(n^(0.8k)). The issue with the complexity of k-Clique is that the upper bound on k is n. Therefore, finding a clique of size n/2, for example, has a complexity of ~n^(n/2), which is not polynomial.

If you really want to test this out, pick a graph with a lot of nodes, that is highly connected, and set the value of k to a value like n/2.

The thing is, the only thing k would influence in my algorithm is the number of executions of the main loop, which remains bounded by the number of vertices plus the number of edges. Imagine if I selected a k equal to the size of the graph I was given: the main loop would execute only once.

Likewise if I select a very low k; if k is 1, the outermost loop will execute exactly once.

If I were to select a mean or median k, I could still have at most |V| + |E| executions of the main body of the loop, although this would be your best bet for maximizing those executions.

Increasing the density of the graph would increase the degree of each node on average, and that would increase the running time of the core of the algorithm, but since that part is quadratic, it is at worst (|V|-1)^2.
 

Harpuia

Member
What's a simple way to turn an integer variable into its equivalent char representation?

In other words, how would I go about taking an int variable of, let's say, 1, and saving it to a char variable of 1?

I know that I could just use ASCII values to get the numbers 0-9, but what about 11 or 211 or anything else beyond one digit?

EDIT: I was hoping people who would help would not immediately post code, I just wanna be nudged into the right direction!
 

Aleph

Member
What's a simple way to turn an integer variable into its equivalent char representation?

In other words, how would I go about taking an int variable of, let's say, 1, and saving it to a char variable of 1?

I know that I could just use ASCII values to get the numbers 0-9, but what about 11 or 211 or anything else beyond one digit?

You mean in C? You would have to add the ASCII value for '0' (48, I think) to each digit, and put them in a char[]. To "look" at each digit you can do: (number / 10^(n-1)) % 10, where n is the digit index (starting from the right, starting with 1).
 

Harpuia

Member
You mean in C? You would have to add the ASCII value for '0' (48, I think) to each digit, and put them in a char[]. To "look" at each digit you can do: (number / 10^(n-1)) % 10, where n is the digit index (starting from the right, starting with 1).

C++ is what I'm using. What I need this for is to number output files from 1 to x, depending on how many files I need to create. So the first file would be 1.txt, then 2.txt, and so on until x.txt.
 

Aleph

Member
C++ is what I'm using. What I need this for is to number output files from 1 to x, depending on how many files I need to create. So the first file would be 1.txt, then 2.txt, and so on until x.txt.

I just remembered the C standard library's itoa(int, char*, int) function: it takes an integer value and converts it into a null-terminated string (it's also available from C++). You could do a for loop from 1 to x, call itoa with your loop index, and create the file or save the char[] into an array for later use.
 

poweld

Member
What's a simple way to turn an integer variable into it's equivalent char representation?

In other words, how would I go about making an int variable of let's say, 1, and save it to a char variable of 1.

I know that I could just use ASCII values to get the numbers 0-9 but what about 11 or 211 or anything else beyond one digit.

EDIT: I was hoping people who would help would not immediately post code, I just wanna be nudged into the right direction!

Code:
...

itoa() is non-standard, so don't rely on it.

edit: Removed code as per your request. Look at "sprintf"
 

Slavik81

Member
C++ is what I'm using. What I need this for is to number output files from 1 to x, depending on how many files I need to create. So the first file would be 1.txt, then 2.txt, and so on until x.txt.

The C++ way of doing things with streams is a little complicated to set up and understand, but std::ostringstream would probably be the standard way of doing that in C++.

sprintf is not well liked, as you can easily end up with buffer overflow exploits. I may scoff at Microsoft's hatred for strcpy and std::transform, but there's a very real problem with sprintf in that you cannot know how big a buffer to allocate for the output.

snprintf is a little safer: you can limit the amount of data you write to ensure you remain within your buffer, and you can query the amount of space the formatted output would take up and be certain you've allocated enough memory for it. Though this function is only standard in C++11 or C99.
 
So I am following this article to get my programming project on Bitbucket.

I am new to this and all. I have installed the plugin correctly and everything. I am a little unsure about this command.

Code:
cd /path/to/my/repo

Is this the path where Visual Studio stores my projects, or a repository that I created with the Git plugin?

The rest of the commands don't work for me.

Code:
$ git remote add origin https://[username]@bitbucket.org/[username]/[folder].git
$ git push -u origin --all   # to push up the repo for the first time
 

Jokab

Member
git problems

Correct git practice is to initialize a new repo for every project you need on a remote such as Bitbucket. So you do
Code:
cd /path/to/my/repo
which indeed means where Visual Studio or your IDE of choice stored your project. That means if /savedProjects/ is the folder, then /savedProjects/myProject/ is the correct path. When you get there you do
Code:
git init
which initializes a new git repo in that directory.

When you've done that, you need to register that new repo with a remote. To do that, you run the command you posted above (I'm not used to Bitbucket, but if it tells you to do this then it's probably correct).
Code:
$ git remote add origin https://[username]@bitbucket.org/[username]/[folder].git

You need to add your existing files to your local commit:
Code:
git add .   # this adds all the files

Then you need to make an initial commit, so you do:
Code:
git commit -a -m "Initial commit"
Now you've created a commit in your local repo, but not in the remote one. To put it on the remote, you run the second line you posted:
Code:
git push -u origin --all

That should be all. I'm by no means an expert on this stuff, but this is how I would do it.
 
Correct git practice is to initialize a new repo for every project you need on a remote such as Bitbucket. So you do
Code:
cd /path/to/my/repo
which indeed means where Visual Studio or your IDE of choice stored your project. That means if /savedProjects/ is the folder, then /savedProjects/myProject/ is the correct path. When you get there you do
Code:
git init
which initializes a new git repo in that directory.

When you've done that, you need to register that new repo with a remote. To do that, you run the command you posted above (I'm not used to Bitbucket, but if it tells you to do this then it's probably correct).
Code:
$ git remote add origin https://[username]@bitbucket.org/[username]/[folder].git

You need to add your existing files to your local commit:
Code:
git add .   # this adds all the files

Then you need to make an initial commit, so you do:
Code:
git commit -a -m "Initial commit"
Now you've created a commit in your local repo, but not in the remote one. To put it on the remote, you run the second line you posted:
Code:
git push -u origin --all

That should be all. I'm by no means an expert on this stuff, but this is how I would do it.

Thank you so much! That was very helpful. Got it working.
 

squidyj

Member
Two points:

First off, the fact that your estimate of the program's complexity doesn't include the value of k at all is a big red flag.

Second, k-Clique is not NP-hard for fixed values of k. The 'easy' solution is O(n^k), and the best known solution is approximately O(n^(0.8k)). The issue with the complexity of k-Clique is that the upper bound on k is n. Therefore, finding a clique of size n/2, for example, has a complexity of ~n^(n/2), which is not polynomial.

If you really want to test this out, pick a graph with a lot of nodes, that is highly connected, and set the value of k to a value like n/2.

Here's some output from an implementation I wrote. I wound up having to rewrite everything from scratch as the library I was using before was giving me some weird results. I'm not sure all the data structures were properly updating, could easily have been my fault somewhere.

Anyways, these graphs were generated using the Erdős–Rényi model with a 90% chance of an edge between any two vertices. It searches for a clique that is at least some fraction of the graph size. It does 15 graphs, increasing the vertex count by 10 each time. It does 10 loops of that, increasing the clique fraction by 1/10th each time until it's searching for complete graphs in the final pass.

http://www.pastebin.ca/2374578

Anyways, I just whipped this up today so there isn't much information in the output but you should at least be able to get some idea of the scaling. both with increased k and increased graph size.

Edit: rewrote a comparison function to take advantage of my sorted lists (I was brute-forcing it at n^2 for testing purposes). Upped the probability factor to 99.5%, up to 300-vertex graphs with k = n/2; MUCH better performance.

http://pastebin.com/7s73HGgs
 

tokkun

Member
Here's some output from an implementation I wrote. I wound up having to rewrite everything from scratch as the library I was using before was giving me some weird results. I'm not sure all the data structures were properly updating, could easily have been my fault somewhere.

Anyways, these graphs were generated using the Erdős–Rényi model with a 90% chance of an edge between any two vertices. It searches for a clique that is at least some fraction of the graph size. It does 15 graphs, increasing the vertex count by 10 each time. It does 10 loops of that, increasing the clique fraction by 1/10th each time until it's searching for complete graphs in the final pass.

http://www.pastebin.ca/2374578

Anyways, I just whipped this up today so there isn't much information in the output but you should at least be able to get some idea of the scaling. both with increased k and increased graph size.

Edit: rewrote a comparison function to take advantage of my sorted lists (I was brute-forcing it at n^2 for testing purposes). Upped the probability factor to 99.5%, up to 300-vertex graphs with k = n/2; MUCH better performance.

http://pastebin.com/7s73HGgs

How are you verifying the correctness of your results?
 

squidyj

Member
How are you verifying the correctness of your results?

On a positive result:
The algorithm spits back out a graph, and I take the list of vertices and check it against the input graph. If every vertex in the list is connected to every other vertex in the list in the original graph, then we can safely declare it to be a clique. And, of course, check that they're in the graph. Of course they will be, because the graph produced is strictly a subgraph of the given graph.

A negative result is an issue: how do you verify that there aren't any cliques when your known-correct algorithms for finding cliques are intractable? I think that's going to come down to writing the proof, but the logic of it is that I never remove an element that could possibly be in the last remaining clique of size k or greater. (Actually, the algorithm should work if I never remove an element that could possibly be part of any clique of size k or greater, but that makes the positive verification very messy.) I suppose I should have rewritten my output so it doesn't put "verified" on the negative cases.
 

Slavik81

Member
The C++ way of doing things with streams is a little complicated to set up and understand, but std::ostringstream would probably be the standard way of doing that in C++.

sprintf is not well liked, as you can easily end up with buffer overflow exploits. I may scoff at Microsoft's hatred for strcpy and std::transform, but there's a very real problem with sprintf in that you cannot know how big a buffer to allocate for the output.

snprintf is a little safer: you can limit the amount of data you write to ensure you remain within your buffer, and you can query the amount of space the formatted output would take up and be certain you've allocated enough memory for it. Though this function is only standard in C++11 or C99.

You know, I totally forgot about this, but there is also std::to_string, if you have a C++11 compiler. It is by far the simplest option if it is supported.
 

Bruiserk

Member
I was looking up some interview questions since I have my first programming interview on Wednesday. It is a JavaScript/C# job, and even though I didn't list those two languages on my resume, they requested an interview. I'm not sure if I'll get the job; I am mainly going because I think it will be a good experience.

Anyways, one question I saw was: explain the difference between "equality" and "equivalence".

I realized I wasn't sure how to answer this question, so I did some research. Of course, equality refers to the value and type of the data being the same, i.e.:

int a = 3;
int b = 3;

a == b

For equivalence, would this be a suitable answer?

Equivalence between two things depends on their truth value given a certain property (I'm not sure if "property" is the right word here), i.e.:

Given two arrays, each containing the integer c, and two integers a and b, where a is in array 1 and b is in array 2: if a appears before c in array 1 and b appears before c in array 2, then a and b are equivalent with respect to the ordering property that they both appear before c.
 
I was looking up some interview questions since I have my first programming interview on Wednesday. It is a JavaScript/C# job, and even though I didn't list those two languages on my resume, they requested an interview. I'm not sure if I'll get the job; I am mainly going because I think it will be a good experience.

Anyways, one question I saw was: explain the difference between "equality" and "equivalence".

I realized I wasn't sure how to answer this question, so I did some research. Of course, equality refers to the value and type of the data being the same, i.e.:

int a = 3;
int b = 3;

a == b

For equivalence, would this be a suitable answer?

Equivalence between two things depends on their truth value given a certain property (I'm not sure if "property" is the right word here), i.e.:

Given two arrays, each containing the integer c, and two integers a and b, where a is in array 1 and b is in array 2: if a appears before c in array 1 and b appears before c in array 2, then a and b are equivalent with respect to the ordering property that they both appear before c.

Not an expert, but it probably refers to the difference between the == and === operators in JavaScript.
 
The interview questions weren't JavaScript-specific, but I'll take a look anyways, thanks.

In general, for programming, you are looking at something like this (and these questions are tough, because you can really get thrown in an interview depending on your background and the intended meaning of equivalence):

int a = 1;
double b = 1.0;

You can see that a does not equal b, because they are different types (int and double).

However, a is equivalent to b, because their value is essentially 1.

In general you might hit/use this because of limitations representing some floating-point numbers, or similar cases like that. You should also brush up on the differences between comparing/passing by value or by reference, because that is probably where they are going with that.
 
Yeah, in my experience interview questions usually end up expecting you to have instantaneous, very detailed knowledge of the by-the-book CS algorithms and data structures (because "IT'S IMPORTANT" even though you'll be able to look it up on the rare occasions you need it) and a broad, shallow knowledge of everything else. Sometimes language knowledge gets thrown into the former category, sometimes the latter. Though if you claim to have experience, they're more likely to call you on it...

The interview dichotomy always throws me, tbh. I just can't bring myself to study up on this irrelevant garbage. Implement quicksort? Eff you, I'll use qsort! I'm trying to do real work over here!

=)
 