
Programming |OT| C is better than C++! No, C++ is better than C

Pau

Member
No, it's not. You're generating a new set of samples.

This snippet basically does the bootstrap (unless sample doesn't work here either); I'm still not sure what's wrong with your original code.

Code:
resample <- vector(mode = "numeric", length = n)   # empty vector to hold the bootstrap sample
for (i in 1:n)
{
  t <- sample(1:n, 1)      # draw one index at random (with replacement across iterations)
  resample[i] <- x[t]
}
It was the vector that threw me off. Seems like my issue was that I imported the data into a data frame and not a vector. Once I converted, using resample and mean worked. Thank you!
 

Mr.Mike

Member
People seem to be entirely comfortable using the term "ternary operator" as a synonym for "conditional operator" in the context of programming. But strictly speaking, the term "ternary operator" just refers to any operator taking three operands.

Does anyone here know of a programming language that has more than one ternary operator, or perhaps a ternary operator that isn't used as a conditional operator?
 
I have a request. I need help with a program that involves a heap implementation. My adjustHeap function is wrong according to an automatic testing program my professor provided, but I cannot figure out how or why it is wrong.

The program is in C. If anyone is willing to help, I'd greatly appreciate a chat via PM.
 

robin2

Member
Probably insanely stupid question about Eclipse IDE:
why does the file 'Lesson44.java' get displayed twice, both outside and inside the default package? See picture.

[screenshot: Eclipse Package Explorer showing Lesson44.java both outside and inside the default package]


I created it like all the other files (R-click > New > ), but all the other files were placed just within the default package. So the change for that file happened suddenly and seemingly for no reason. Maybe it's because I upgraded from "normal Eclipse" to "Eclipse for Java EE"?

PS: I have many more files inside the default package; I edited the image to keep it smaller.
 

Ambitious

Member
Oh boy. Gonna have my first job interview next Wednesday. I hope I can keep my nerves under control, otherwise it's gonna be an embarrassing disaster.

It was alright. I was nervous and said a few stupid things, but it wasn't as bad as it could have been. They said they think I'd be a good fit for the company and that I have a "high chance" of being hired. They're gonna tell me their decision next Wednesday.

The company sounded pretty cool. Personal atmosphere, flat hierarchy, training opportunities, a lot of freedom, nice perks, and the best view in town. It's mainly Java Enterprise stuff, but they also use other languages and technologies from time to time. I'm fine with that.
 

KageZero

Member
Does anyone know of an open source program in C# with a lot of interface usage, so I can download it and work on it?
I'm having a bit of trouble with them at my job. I know how they work in theory and I've done some basic stuff with them, but here I'm lost... The code is huge and interfaces are used everywhere, but I have problems when I need to create my own to make modular components, or when casting different objects that implement my interface so I can call the same methods on them... Maybe I haven't explained it very well, because I don't even know exactly what to ask.
 
Anyone familiar with CMake? I'm tripping through their tutorials and documentation right now. It's seemingly more of a project builder than a cross-compiling tool right? For example, if I invoke CMake on Windows it's going to build my solution files and whatnot so I can invoke msbuild instead of invoking cl.exe and building an executable out of my source?
 
Anyone familiar with CMake? I'm tripping through their tutorials and documentation right now. It's seemingly more of a project builder than a cross-compiling tool right? For example, if I invoke CMake on Windows it's going to build my solution files and whatnot so I can invoke msbuild instead of invoking cl.exe and building an executable out of my source?

Depends what generator you use. Yea, if you use the Visual Studio generator it will generate a VS project, and then you can load it up in VS and build it. But there are other generators. For example, you can use the ninja generator, which will generate a ninja build file, and then you can run ninja to compile. But yes, CMake does not actually compile any code. It only provides a compiler- and platform-agnostic language that you can use to generate build files for other build systems (make, VS, ninja, etc).
 
Depends what generator you use. Yea, if you use the Visual Studio generator it will generate a VS project, and then you can load it up in VS and build it. But there are other generators. For example, you can use the ninja generator, which will generate a ninja build file, and then you can run ninja to compile. But yes, CMake does not actually compile any code. It only provides a compiler- and platform-agnostic language that you can use to generate build files for other build systems (make, VS, ninja, etc).

Got it. Ok, thanks!
 

Kalnos

Banned
Is Cracking the Coding Interview an actually good book?

It just depends on where you're interviewing, to be honest. If you're going to try to apply at companies like Microsoft, Google, etc. then it's probably a good resource. I have only worked at smaller companies (<30 people) and honestly whiteboard interview questions have never really gotten much more complicated than "FizzBuzz".
 

vypek

Member
Is Cracking the Coding Interview an actually good book?

It seems like a good book for brushing up on your knowledge when going to big companies. I have to admit that I didn't read through too much of it because the day after my copy was delivered, I got my current job
 

Ōkami

Member
I'm having trouble with Matlab.


I have this image. Right now we're doing region of interest, and the idea is to isolate that rectangle. I have to make it binary first, so I used the methods rgb2gray and then im2bw.

I have to make a function that returns the coordinates of the 4 corners of the rectangle; right now I have this.

Code:
function [p1] = roi(img)
% roi: scan the binary image and return the [row, col] coordinates
%   of a black pixel (intended to be the top-left corner)
[x,y] = size(img);
aux1 = 0;
aux2 = 0;
for i =1: x
    for j = 1: y
        if img(i,j) == 0
            aux1 = i;
            aux2 = j;
            break
        end
    end
end
p1 = [aux1,aux2];
end
That one just returns the top-left point for the time being, or at least it's supposed to; it doesn't work.

The idea is for the function to search the image pixel by pixel; once it gets to the first black pixel it's supposed to give me the coordinate, but it returns a wrong value.

Checking the image with imtool tells me that the correct pixel is [160,81], yet this code returns the value [294,160], which is completely wrong; in fact, that pixel is somewhere inside the rectangle.

Guys, what am I doing wrong?
 
It just depends on where you're interviewing, to be honest. If you're going to try to apply at companies like Microsoft, Google, etc. then it's probably a good resource. I have only worked at smaller companies (<30 people) and honestly whiteboard interview questions have never really gotten much more complicated than "FizzBuzz".

Good to know. I'll probably still go over it just in case, but I really don't think I'm a Microsoft/Google/Amazon/Facebook/Apple level programmer. I feel like those are the people who started coding and learning about this shit when they were 7 years old and do it for fun in their free time. And I just kind of want to have it as a job rather than as a job and a hobby.
 

JeTmAn81

Member
Is Cracking the Coding Interview an actually good book?

I'm enjoying it as a skills challenge and general brush-up on major areas of CS. The example problems seem really good and a lot of thorough explanation is given. It seems like some 80% of the book is answers to the problems. But I do think most of it is only really applicable if you are interested in applying to a big, high-standards company.
 
I'm enjoying it as a skills challenge and general brush-up on major areas of CS. The example problems seem really good and a lot of thorough explanation is given. It seems like some 80% of the book is answers to the problems. But I do think most of it is only really applicable if you are interested in applying to a big, high-standards company.

Hmm, why wouldn't it be applicable for lower standard companies? Wouldn't it be better to overprepare?
 

JeTmAn81

Member
Hmm, why wouldn't it be applicable for lower standard companies? Wouldn't it be better to overprepare?

Oh sure, it doesn't hurt you to know more. It's just that I think smaller companies are less likely to be as strenuous in their questioning.

I've just had two interviews (one with the CEO) for a 90k/year job (a high salary for this area) and they asked me exactly two technical questions. They asked me to identify a few C# keywords and to tell them what the === JavaScript operator does.
 

Haly

One day I realized that sadness is just another word for not enough coffee.
Oh sure, it doesn't hurt you to know more. It's just that I think smaller companies are less likely to be as strenuous in their questioning.

I've just had two interviews (one with the CEO) for a 90k/year job (a high salary for this area) and they asked me exactly two technical questions. They asked me to identify a few C# keywords and to tell them what the === JavaScript operator does.

wat
 

10 years ago I had an interview for a 95k/year job (also high for that area, back then anyway) and the one question was "write a function to count the number of 1 bits in a 32-bit integer".

Bigger, more prestigious companies can pick whoever they want, so the questions are hard. Small companies are often struggling just to find someone who is a notch above face-rolling their keyboard.
 

Haly

One day I realized that sadness is just another word for not enough coffee.
Bigger, more prestigious companies can pick whoever they want, so the questions are hard. Small companies are often struggling just to find someone who is a notch above face-rolling their keyboard.

I understand the rationale behind this, but wouldn't smaller companies also have more difficulty absorbing the opportunity cost of hiring an (eventually revealed) dunce than a large company?

Or am I to assume that small companies are just extremely fragile due to the above?
 
I understand the rationale behind this, but wouldn't smaller companies also have more difficulty absorbing the opportunity cost of hiring an (eventually revealed) dunce than a large company?

Or am I to assume that small companies are just extremely fragile due to the above?

In the general case probably yes. I imagine most small companies don't pay all that well. I always found small company jobs through friends and professional networking. Having someone reputable vouch for you pretty much eliminates that risk
 
10 years ago I had an interview for a 95k/year job (also high for that area, back then anyway) and the one question was "write a function to count the number of 1 bits in a 32-bit integer".

Bigger, more prestigious companies can pick whoever they want, so the questions are hard. Small companies are often struggling just to find someone who is a notch above face-rolling their keyboard.

This is a single instruction in assembly. :^)

http://www.tptp.cc/mirrors/siyobik.info/instruction/POPCNT.html
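
For what it's worth, most compilers also expose that instruction as an intrinsic. A quick sketch, assuming GCC or Clang (which provide __builtin_popcount):

Code:
#include <stdint.h>
#include <stdio.h>

int main(void) {
    uint32_t n = 3000000u;
    /* GCC/Clang builtin; typically lowered to a single POPCNT instruction
       when compiling for a CPU that supports it. */
    printf("%d\n", __builtin_popcount(n));
    return 0;
}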
 

JeTmAn81

Member

That seems like cheating.

I'm not sure what the best way to do it in higher-level languages is. My immediate naive solution is to work backwards from the highest power of two (2^31 for a 32-bit integer) and check each power against the number in question to see if the number is >= it. If it is, subtract that power from the number and increment a counter for the quantity of 1 bits. Continue until the number is zero or you've gone past 2^0. A more efficient version of this would probably be to start at 2^0 and work upwards instead.
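
A rough C sketch of that subtract-the-powers idea, just to illustrate (the function name and loop shape are mine):

Code:
#include <stdint.h>

/* Walk from the highest power of two (2^31) down to 2^0; whenever the
   remaining value is at least that power, that bit is set, so subtract
   it and count it. */
int popcount_subtract(uint32_t n) {
    int count = 0;
    for (uint32_t p = 1u << 31; p > 0; p >>= 1) {
        if (n >= p) {
            n -= p;
            count++;
        }
    }
    return count;
}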
 

Somnid

Member
That seems like cheating.

I'm not sure what the best way to do it in higher-level languages is. My immediate naive solution is to work backwards from the highest power of two (2^31 for a 32-bit integer) and check each power against the number in question to see if the number is >= it. If it is, subtract that power from the number and increment a counter for the quantity of 1 bits. Continue until the number is zero or you've gone past 2^0. A more efficient version of this would probably be to start at 2^0 and work upwards instead.

Use bitshifting (>>) and & with a bit-mask of 1.
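
Something like this, as a minimal sketch (the name is mine):

Code:
#include <stdint.h>

/* Shift the value right one bit at a time and test the low bit with & 1. */
int popcount_loop(uint32_t n) {
    int count = 0;
    while (n != 0) {
        count += n & 1u;
        n >>= 1;
    }
    return count;
}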
 

Koren

Member
Hash tables, the answer is always hash tables.
I'd be leaning this way in the real world, unless there are memory issues. Some context is needed.

Anyhow, I think you can discuss the pros and cons of half a dozen solutions for the better part of an hour if you need to. Assuming it's not just a stupid question, they may be fishing for people who have stumbled upon this kind of algorithm/technique before.

Pretty sure you can discuss this problem for 45+ minutes by:
- presenting the >>1 / %2 solution as the easiest solution (and the simplest to maintain),
- suggesting hash tables for speed, at the expense of memory footprint (and explaining how to shorten the table, and how to build it quickly),
- describing the Kernighan algorithm as a nice theoretical solution (throwing a proof of termination in there; see the sketches just after this list),
- presenting the divide-and-conquer solution as the one to use if you don't have memory (and it's probably the best one to translate into hardware form, FPGA-style).
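
Rough C sketches of two of those, in their standard textbook form (function names are mine):

Code:
#include <stdint.h>

/* Kernighan: n & (n - 1) clears the lowest set bit, so the loop body runs
   exactly once per set bit (which is also the termination proof: n loses
   one bit every iteration). */
int popcount_kernighan(uint32_t n) {
    int count = 0;
    while (n != 0) {
        n &= n - 1;
        count++;
    }
    return count;
}

/* Divide and conquer: sum bits in pairs, then nibbles, then bytes, using
   only shifts, masks and adds (no table, no branches). */
int popcount_divide(uint32_t n) {
    n = n - ((n >> 1) & 0x55555555u);                  /* 2-bit sums */
    n = (n & 0x33333333u) + ((n >> 2) & 0x33333333u);  /* 4-bit sums */
    n = (n + (n >> 4)) & 0x0F0F0F0Fu;                  /* 8-bit sums */
    return (int)((n * 0x01010101u) >> 24);             /* add the four bytes */
}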
 
So the best non-intrinsic solution is a lookup table. Somnid was close with the hash table answer. Then there's the question of how big the lookup table should be: 1 byte? 2 bytes? Obviously 4 bytes is too big. Whether 1 or 2 is the right answer probably depends on your use case: how common is this operation, are the numbers usually big, usually small, etc.? You'd probably want to profile it.

Code:
int popcnt(uint32_t n) {
    return t[n&0xFFFF] + t[n >> 16];
}

Edit: I guess code can't be spoiler tagged very well. Oops
 
Exactly the one CPPKing gave above (pretty sure he meant lookup tables and not hash tables).

You have to fill the table t beforehand with 32768 entries: the number of bits in each of the first 32768 integers.

Technically a lookup table is a hash table where the hash function is the identity :) So I guess you could say a hash table is right.
 

JeTmAn81

Member
Let's explore this solution a bit more:

Code:
int popcnt(uint32_t n) {
    return t[n&0xFFFF] + t[n >> 16];
}

So we're indexing into an array twice. This array should hold the number of 1 bits for a specific span of numbers. In order to avoid creating an array that has as many entries as there are possible 32-bit numbers (4,294,967,296), we can view each half of the number as its own number. A 16-bit sequence of bits will have the same number of 1 bits whether it sits in the lower 16 positions or the upper 16.

So that means we only need enough entries in the lookup table to hold 2^16 (65,536) 1-bit counts. A much more palatable memory footprint.

Then we have to cut our 32-bit integer in half in order to get the number of 1 bits for each half from the array and add them together to get the total. ANDing a 32-bit number with 0xFFFF ends up zeroing out the upper half of the number, since the upper 16 bits of 0xFFFF are all zeroes. Like so:


00000000001011011100011011000000 (3,000,000 in decimal)

&

00000000000000001111111111111111

=

00000000000000001100011011000000 (50,880 in decimal)

So the left half gets cleared to zeroes, and we can just regard the right half as the number to index into the array. t[50880] is 6.

To get the left half, we right bitshift the number by 16. This takes all the bits from the left half and puts them in the right half, giving us a number that's a valid index into the array.


00000000001011011100011011000000 (3,000,000 in decimal)

>> 16

=

00000000000000000000000000101101 (45 in decimal)

t[45] = 4.

So you've now calculated the 1 bit counts for both halves. Add them together and you get 6 + 4 = 10, which is the number of 1 bits in the whole 32-bit number.

Hopefully I got all those numbers right; they kept getting messed up when I was working on them.
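
For the curious, here's a small self-contained C check of those numbers, using a plain loop count instead of the table, purely to verify the arithmetic above:

Code:
#include <stdint.h>
#include <stdio.h>

/* Simple per-value bit count, used only to double-check the worked example. */
static int bits(uint32_t v) {
    int c = 0;
    while (v) { c += v & 1u; v >>= 1; }
    return c;
}

int main(void) {
    uint32_t n = 3000000u;
    uint32_t lo = n & 0xFFFFu;   /* lower half: 50880 */
    uint32_t hi = n >> 16;       /* upper half: 45    */
    printf("lo=%u (%d ones), hi=%u (%d ones), total=%d\n",
           (unsigned)lo, bits(lo), (unsigned)hi, bits(hi), bits(lo) + bits(hi));
    /* Prints: lo=50880 (6 ones), hi=45 (4 ones), total=10 */
    return 0;
}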
 

Milchjon

Member
You guys are killing me.

Very dumb question, but a quick search didn't give me a satisfactory answer:
Is there a good reason why server- and client-side scripting is done in different languages?

I guess stuff like node.js (from what I understand) is blurring it, but why not have one language for both purposes from the start?
 
Exactly the one CPPKing gave above (pretty sure he meant lookup tables and not hash tables).

You have to fill the table t beforehand with 32768 entries: the number of bits in each of the first 32768 integers.

I guess I don't know what lookup tables are or how they work. Where does 32768 come from?

How is this faster than looping over the 32 individual bits and counting the 1s?
 

Haly

One day I realized that sadness is just another word for not enough coffee.
I guess I don't know what lookup tables are or how they work. Where does 32768 come from?

How is this faster than looping over the 32 individual bits and counting the 1s?

The lookup table, t[] in JeTmAn's example, contains the bit counts of every 16-bit number. As a visual example, this is a lookup table (maybe you'd be more familiar with the term 'dictionary'?) of all 2-bit integers.

Index - Value - Binary
[0] - 0 - 00
[1] - 1 - 01
[2] - 1 - 10
[3] - 2 - 11

I think it's obvious why the lookup is faster than looping through 32 bits, but it requires that the lookup table be built first.
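
That same toy table written out as a C array, just to make the indexing concrete (the name t2 is made up):

Code:
/* t2[v] = number of 1 bits in the 2-bit value v, for v = 0..3 */
static const int t2[4] = {0, 1, 1, 2};
/* e.g. t2[3] == 2 */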
 

Koren

Member
I guess I don't know what lookup tables are or how they work. Where does 32768 come from?
Should have been 65536, sorry. That's 2^16 (32768 is 2^15).

You probably know that integers are stored in binary, using 32 bits for... 32-bit integers. CppIsKing's solution counts the number of bits in the lower 16 bits and in the upper 16 bits, and sums the results.

There are 65536 possible values in 16 bits... You just use a "giant" table with the number of bits pre-computed for all integers between 0 and 65535.
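
A rough sketch of pre-computing such a table in C; the array name and the recurrence used to fill it (each value has the bit count of value >> 1 plus its own low bit) are my choice, nothing official:

Code:
#include <stdint.h>

static uint8_t t[1 << 16];   /* bit count of every 16-bit value */

static void build_table(void) {
    /* t[0] is already 0; every other entry reuses the entry for i >> 1. */
    for (uint32_t i = 1; i < (1u << 16); i++)
        t[i] = t[i >> 1] + (i & 1u);
}

int popcnt(uint32_t n) {
    return t[n & 0xFFFFu] + t[n >> 16];
}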

How is this faster than looping over the 32 individual bits and counting the 1s?
Looping over the 32 bits is roughly:
- 32 right shifts
- 32 increments
- 32 bitwise &
- 31 short jumps on non-zero

CppIsKing's solution is:
- 1 right shift
- 1 bitwise &
- 2 indirect memory accesses
- 1 sum

The second solution wins hands down... Even if the memory accesses can be a bit slower than the other operations, it's at the very least one order of magnitude faster, and probably nearly two.
 
I guess you could say "but the loop solution can be done entirely in registers, so would 100ish register operations be slower than 2 memory loads?" This is why you need to profile.

It turns out in this case that yes, it's slower so the lookup table wins. But the results can sometimes be surprising.

Another thing to consider is whether to do a 2 byte lookup table or 1 byte. With a 1 byte lookup table, you'll have a pretty respectable cache hit rate and with a 2 byte lookup table it will be close to 0. So again if you were using this solution in the real world you'd want to benchmark different solutions with your actual data. But in reality, you'd almost always have an intrinsic available, so this is mostly just theoretical :)
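
A hedged sketch of the 1-byte variant being weighed here: a 256-entry table and four lookups per 32-bit value (names are mine):

Code:
#include <stdint.h>

static uint8_t t8[256];   /* bit count of every 8-bit value; fill once at startup */

static void build_t8(void) {
    for (int i = 1; i < 256; i++)
        t8[i] = t8[i >> 1] + (i & 1);
}

/* Four lookups instead of two, but the whole table fits in a few cache lines. */
int popcnt8(uint32_t n) {
    return t8[n & 0xFF] + t8[(n >> 8) & 0xFF]
         + t8[(n >> 16) & 0xFF] + t8[n >> 24];
}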
 

Koren

Member
I agree that profiling is the ultimate judge, even if I'll only spend time profiling the parts of code that are used 80% of the time (and try to guess from experience what's best for the others).

A memory look-up shouldn't take 100+ cycles, anyway (it kind of does on the N64, but that's a special case, and there you would put the look-up table in the scratchpad anyway).


Still, if you use the table less than a couple of thousand times and there are no interactive/realtime parts, the look-up table could still not be the best solution.


I'm still waiting for an FPGA co-processor in computers... I saw it as a proof of concept twenty years ago, and it's just great for this kind of stuff (although multitasking makes things a bit harder).
 
Should have been 65536, sorry. That's 2^16 (32768 is 2^15).

You probably know that integers are stored in binary, using 32 bits for... 32-bit integers. CppIsKing's solution counts the number of bits in the lower 16 bits and in the upper 16 bits, and sums the results.

There are 65536 possible values in 16 bits... You just use a "giant" table with the number of bits pre-computed for all integers between 0 and 65535.


Looping over the 32 bits is roughly:
- 32 right shifts
- 32 increments
- 32 bitwise &
- 31 short jumps on non-zero

CppIsKing's solution is:
- 1 right shift
- 1 bitwise &
- 2 indirect memory accesses
- 1 sum

The second solution wins hands down... Even if the memory accesses can be a bit slower than the other operations, it's at the very least one order of magnitude faster, and probably nearly two.

Where's this table coming from? The "uint32_t n" thing?
 

JeTmAn81

Member
Where's this table coming from? The "uint32_t n" thing?

I can see where this is confusing.

First of all, let's say this. In the most immediate use case, simply calculating the number of 1 bits in a single number, using a lookup table would not be the fastest, because you'd first have to spend the time to build that lookup table before using it.

So in that instance you would want to use a technique like doing a binary AND operation between your target number and each power of two, and checking whether the result of that operation is zero. If it's zero, that means the bit in the position being checked is zero, and if it's not, then the bit in that position is one.

For instance, say your target number is 13. That would make it this in binary:

00000000000000000000000000001101

You'd start cycling through those powers of two:

00000000000000000000000000001101
&
00000000000000000000000000000001 (1 in decimal, or 2 to the 0th power)
=
00000000000000000000000000000001

So that leaves us with a result that is NOT zero, indicating the target number has a 1 in that first position.

Next, you'd try the next power of two:

00000000000000000000000000001101
&
00000000000000000000000000000010 (2 in decimal, or 2 to the 1st power)
=
00000000000000000000000000000000

Notice how there is just a single 1 in our comparison number, moved over one position? That's our bit mask, letting us compare just that digit. In this case, the digit in that position in the target number is zero, hence we get a zero when we do the binary AND operation.

And so on until you've checked all the digits.

That is one way of checking an actual number without the lookup table.

The lookup table would be most useful if you are going to be checking this result for a bunch of different numbers. It prevents you from having to do this kind of calculation every time.

The uint32_t n parameter in the original example is the target number. You might be confusing the _t at the end with the t[] array that's also referenced, but that t[] array is a different variable defined somewhere else. You can't actually see its definition in the example so it's assumed that it's defined globally and you can reach it from the popcnt function.
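
Putting that mask-and-test approach into a small C sketch (the function name is mine):

Code:
#include <stdint.h>

/* AND the value with each power of two in turn; a non-zero result means
   the bit at that position is set. */
int popcount_masks(uint32_t n) {
    int count = 0;
    for (int pos = 0; pos < 32; pos++) {
        if (n & (1u << pos))
            count++;
    }
    return count;
}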
 
I can see where this is confusing.

First of all, let's say this. In the most immediate use case, simply calculating the number of 1 bits in a single number, using a lookup table would not be the fastest, because you'd first have to spend the time to build that lookup table before using it.

But you can do that at compile time

Code:
int t[] = {0, 1, 1, 2, 1, 2, 2, 3, 1, 2, 2, 3, 2, 3, 3, 4,
           1, 2, 2, 3, 2, 3, 3, 4, 2, 3, 3, 4, 3, 4, 4, 5,
           1, 2, 2, 3, 2, 3, 3, 4, 2, 3, 3, 4, 3, 4, 4, 5,
           2, 3, 3, 4, 3, 4, 4, 5, 3, 4, 4, 5, 4, 5, 5, 6,
           1, 2, 2, 3, 2, 3, 3, 4, 2, 3, 3, 4, 3, 4, 4, ...};
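
And if typing the table out by hand gets old, one common way to have the preprocessor expand an 8-bit version of it at compile time is the nested-macro trick; this particular spelling is my sketch, not something from above:

Code:
/* B2(n) lists the bit counts of the 2-bit values 00, 01, 10, 11 with n ones
   already contributed by higher bits; nesting the macros expands to all 256
   entries of an 8-bit table at compile time. */
#define B2(n) n, n + 1, n + 1, n + 2
#define B4(n) B2(n), B2(n + 1), B2(n + 1), B2(n + 2)
#define B6(n) B4(n), B4(n + 1), B4(n + 1), B4(n + 2)

static const unsigned char bits_in_byte[256] = { B6(0), B6(1), B6(1), B6(2) };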
 