
Programming |OT| C is better than C++! No, C++ is better than C

Makai

Member
Can someone here walk me through how big O notation works? I understand that it's basically how long it would take the program to get through everything, but how would you do something like showing that n is O(nlogn)?
A nested for loop is N^2 because you're doing something N times in a loop that runs N times. Log N is something like doing a binary search. So, N log N could be doing a binary search N times.
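To make those shapes concrete, here's a small C++ sketch (countPairs and countHits are made-up names, not from anyone's post): the nested loop does roughly N*N steps, while calling an O(log N) binary search once per element gives N log N.

Code:
#include <algorithm>
#include <cstddef>
#include <vector>

// O(N^2): a loop that runs N times inside a loop that runs N times.
int countPairs(const std::vector<int>& v) {
    int pairs = 0;
    for (std::size_t i = 0; i < v.size(); ++i)
        for (std::size_t j = i + 1; j < v.size(); ++j)
            if (v[i] + v[j] == 0)
                ++pairs;
    return pairs;
}

// O(N log N): an O(log N) binary search, done once for each of the N queries.
int countHits(const std::vector<int>& sorted, const std::vector<int>& queries) {
    int hits = 0;
    for (int q : queries)                                         // N times...
        if (std::binary_search(sorted.begin(), sorted.end(), q))  // ...O(log N) each
            ++hits;
    return hits;
}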
 

Koren

Member
Can someone here walk me through how big O notation works? I understand that it's basically how long it would take the program to get through everything, but how would you do something like showing that n is O(nlogn)?
It means that there is a real number a and an integer N0 such that, if your algorithm works with N > N0 data items (typically a table with N elements), the time taken by the algorithm is always lower than a*N*log(N).

Usually, you count the most common operation (comparisons in a sorting algorithm) and take the biggest term.

Let's take merge sort.

You'll work with
- N data
- two times N/2 data -> N
- four times N/4 data -> N
- ...

There are log(N) levels, so N.log(N) operations in total. Thus O(N.log(N))
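As a sketch (a textbook merge sort in C++, not code from the post), you can see where the N and the log(N) come from: every level of recursion merges all N elements once, and halving until you reach single elements gives about log2(N) levels.

Code:
#include <algorithm>
#include <cstddef>
#include <vector>

// Sorts a[lo, hi). Each level of recursion does O(N) merge work in total,
// and there are about log2(N) levels, hence O(N log N).
void mergeSort(std::vector<int>& a, std::size_t lo, std::size_t hi) {
    if (hi - lo < 2) return;                 // 0 or 1 element: already sorted
    std::size_t mid = lo + (hi - lo) / 2;
    mergeSort(a, lo, mid);                   // sort the left half
    mergeSort(a, mid, hi);                   // sort the right half

    std::vector<int> merged;                 // merge the two halves: O(hi - lo)
    merged.reserve(hi - lo);
    std::size_t i = lo, j = mid;
    while (i < mid && j < hi)
        merged.push_back(a[i] <= a[j] ? a[i++] : a[j++]);
    while (i < mid) merged.push_back(a[i++]);
    while (j < hi)  merged.push_back(a[j++]);
    std::copy(merged.begin(), merged.end(), a.begin() + lo);
}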
 
Can someone here walk me through how big O notation works? I understand that it's basically how long it would take the program to get through everything, but how would you do something like showing that n is O(nlogn)?

Instead of "how long", think of it more as "how many iterations, as a function of the input size".

For example:

Question: How many iterations, as a function of the index you're searching for, does it take to find the 7th item in an array?
Answer: 1. You just return the 7th item

Question: How many iterations, as a function of the index you're searching for, does it take to find the 7th item in a linked list?
Answer: 7. Start at the beginning and move to the next item until you do that 7 times.

Question: How many iterations, as a function of the index n you're searching for, does it take to find the n'th item in a linked list?
Answer: n.

Note the difference between questions 2 and 3. They translate into code as if you're writing these two functions:

Code:
int find_7th_item(Node *head);
int find_nth_item(Node *head, int n);

The first one always takes 7 iterations; it doesn't matter what you pass it. That's called constant time and is always written O(1). The second one takes n iterations; the number of iterations changes depending on how you call it. In this case it's called linear time, and written O(n).
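To make that concrete, here's one possible body for those two signatures (my own sketch; it assumes a singly linked Node with a value and a next pointer):

Code:
struct Node {
    int value;
    Node* next;
};

// O(n): the loop runs n times, so the work grows with the argument you pass in.
int find_nth_item(Node* head, int n) {
    Node* cur = head;
    for (int i = 0; i < n; ++i)   // follow n "next" links
        cur = cur->next;
    return cur->value;
}

// O(1): always exactly 7 link-follows, no matter how long the list is.
int find_7th_item(Node* head) {
    return find_nth_item(head, 7);
}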

O(n) is worse than O(1) because with a big enough argument, your algorithm will begin to run slower.

What about more complicated examples?

Suppose you want to write this function:

Code:
int dump_binary_search_tree(Node *root, int levels_deep)

Assume the BST is complete, in the sense that every single node has both a left and a right.
To go 0 levels deep: 1 iteration
To go 1 level deep: 3 iterations
To go 2 levels deep: 7 iterations
To go 3 levels deep: 15 iterations

So it's growing exponentially. The exact number of iterations is 2^(levels_deep + 1) - 1. You can draw this out on paper to prove it. If you want to go 7 levels deep, you will have to iterate over 255 nodes. So we say this has O(2^n) complexity.

On the other hand, n could be something else. If n is the number of nodes in the tree, then it's O(n) complexity (because if there are 255 nodes in the tree, you're going to iterate over 255 items).
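Here's roughly what that function could look like (my own guess at the body, treating "dump" as just visiting each node and counting how many were touched):

Code:
struct Node {
    int value;
    Node* left;
    Node* right;
};

// In a complete tree this visits 2^(levels_deep + 1) - 1 nodes:
// 1 at depth 0, plus 2 at depth 1, plus 4 at depth 2, and so on.
int dump_binary_search_tree(Node* root, int levels_deep) {
    if (root == nullptr || levels_deep < 0)
        return 0;
    int visited = 1;   // "dump" (visit) this node
    visited += dump_binary_search_tree(root->left,  levels_deep - 1);
    visited += dump_binary_search_tree(root->right, levels_deep - 1);
    return visited;    // number of nodes touched
}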


What if, on the other hand, we want to know about adding an item to a binary search tree? An arbitrary number, we have no idea where it's going to end up.

In the best case scenario you've got a tree that is actually a straight line (imagine you inserted the numbers 2, 3, 4, 5, 6, 7 in that order; it would just be a straight line with each node having its "right" pointer set) and now you try to insert the value 1. Boom, only 1 place it can go, on the left. O(1). On the flip side, the *worst* case is where you have that exact same tree and you try to insert the number 8. Now that's O(n), because it's literally the last item you examine in the tree, after you've examined every other item.

Best and worst case are fairly rare though; usually what people care about is the average case. And in the average case (assuming you are working with a balanced tree), there will be about log(n) levels in the tree (remember the earlier example about how iterating over every item is 2^n in terms of # of levels). For any balanced tree, there are about 2^levels nodes, and log_2(nodes) levels. Just basic discrete math.

So on average, you will have to look at log(n) nodes before you find the place where the new item goes. So that's O(log n).
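A sketch of that insert (my own code, using the same kind of Node with left/right children): it walks one path down from the root, so in a balanced tree that's about log2(n) comparisons on average, and up to n in the degenerate straight-line case.

Code:
struct Node {
    int value;
    Node* left;
    Node* right;
};

// Follows a single root-to-leaf path: ~log2(n) steps if the tree is balanced,
// up to n steps if the tree has degenerated into a straight line.
Node* insert(Node* root, int value) {
    if (root == nullptr)
        return new Node{value, nullptr, nullptr};   // found the empty spot
    if (value < root->value)
        root->left = insert(root->left, value);
    else
        root->right = insert(root->right, value);
    return root;
}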


So what's O(n log n) mean then? Imagine you are trying to build a binary search tree from scratch. You've got a list of n numbers, and you want to make a binary search tree out of it.

Inserting 1st item: log(1)
Inserting 2nd item: log(2)
Inserting 3rd item: log(3)
Inserting nth item: log(n)

Inserting all the items = log(1) + log(2) + ... + log(n) = log(1*2*...*n) = log(n!).

Note this is actually less than n log n, because n log n = log(n^n), and n! < n^n.

Finding an item in a binary search tree is basically the same as inserting, because to insert you first have to find, then you just move some pointers around. So finding is O(log n), as discussed when I mentioned inserting.

Say on the other hand you've got people's data records in a binary search tree sorted by name, and you want to go through them in a particular order (you already have the names in some external order) and process them in exactly that order. That requires n lookups, each of which is O(log n), hence giving you O(n log n).
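If you want to see that shape in real code, std::map (typically a red-black tree, i.e. a balanced BST) gives you O(log n) insert and find, so building the tree and then doing the n lookups is O(n log n) total. The function and variable names here are just made up for the example:

Code:
#include <map>
#include <string>
#include <utility>
#include <vector>

// n inserts at O(log n) each, then n lookups at O(log n) each: O(n log n).
std::vector<std::string> processInGivenOrder(
        const std::vector<std::pair<std::string, std::string>>& records,
        const std::vector<std::string>& namesInDesiredOrder) {
    std::map<std::string, std::string> byName;
    for (const auto& rec : records)                // build the tree: n * O(log n)
        byName[rec.first] = rec.second;

    std::vector<std::string> processed;
    for (const auto& name : namesInDesiredOrder) { // n lookups * O(log n)
        auto it = byName.find(name);
        if (it != byName.end())
            processed.push_back(it->second);
    }
    return processed;
}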
 
For a divide-and-conquer sorting algorithm, there are "n" comparisons done "log(n)" times, as visualized below.

[animated diagram: divide-and-conquer sort doing n comparisons at each of log(n) levels]
 

zeemumu

Member
Well, going back to the first one: if I wanted to prove that n is O(nlogn), that would mean there's a constant C and an integer N0 where C*n*log(n) >= n for all n > N0, right? So I'd have to prove that there's a region where nlogn grows faster than n. Could I prove that with the limit of n/nlogn as n approaches infinity?

Uhh, n > log(n). O(1000 log n + 5n) = O(n)

Would it be considered proof if you show that Cn > 1000n + 5n for some C and some n? Like if C = 2000 and 2000n > 1000n + 5n for all n > 2, or something along those lines?
 
Yea you need to find C such that Cn > 5n + 1000log(n) for all n.

This is pretty easy, just combine and isolate.

C > 5 + 1000log(n)/n

Find the max of 1000log(n)/n; it occurs at n = e (that's where the derivative of log(n)/n hits zero), so just choose C bigger than that max (with natural log, anything above 5 + 1000/e ≈ 373 works).
 

Nelo Ice

Banned
So I'm checking out the lectures from that algorithms class on coursera even though it ended. And damn, nothing is making sense. I'm also looking through stuff like Cracking the Coding Interview and all those algorithm challenge sites, and they still confuse the hell out of me.

Like I have no idea how to wrap my head around creating and coming up with any algorithms. I could stare at a hackerrank, leetcode, coderbyte etc. question forever and still have no clue what to type in. Yet if I saw a solution I could probably break it down and figure out what it does, but I'd have no clue how to come up with the answer myself or why one solution is better than another.

Basically this is where I feel being self taught is a weakness. I haven't found anything that could help me understand theory and algorithms on my own. I'm attempting to study since I have a phone interview next week with a company I really wanna work for, but I'm just drawing blanks every single time I study or try to answer an algorithm question :(.
 

upandaway

Member
So I'm checking out the lectures from that algorithms class on coursera even though it ended. And damn, nothing is making sense. I'm also looking through stuff like Cracking the Coding Interview and all those algorithm challenge sites, and they still confuse the hell out of me.

Like I have no idea how to wrap my head around creating and coming up with any algorithms. I could stare at a hackerrank, leetcode, coderbyte etc. question forever and still have no clue what to type in. Yet if I saw a solution I could probably break it down and figure out what it does, but I'd have no clue how to come up with the answer myself or why one solution is better than another.

Basically this is where I feel being self taught is a weakness. I haven't found anything that could help me understand theory and algorithms on my own. I'm attempting to study since I have a phone interview next week with a company I really wanna work for, but I'm just drawing blanks every single time I study or try to answer an algorithm question :(.
If you want to train an intuition for which method to use (like divide and conquer, dynamic programming, greedy), the best way to do that is to solve a bunch of exercises where you're already told, or given a hint, which one applies. And if you have a question where you want to know the thought process behind coming up with the solution, we can always try to help.
 

Nelo Ice

Banned
If you want to train an intuition for which method to use (like divide and conquer, dynamic programming, greedy), the best way to do that is to solve a bunch of exercises where you're already told, or given a hint, which one applies. And if you have a question where you want to know the thought process behind coming up with the solution, we can always try to help.

Thanks. So far I've found it helpful to see solutions with someone commenting on each one, explaining what the code does. From there I can usually pick up what's going on. Then I like looking at another solution and comparing the differences. It's especially helpful when one solution makes no sense to me but seeing a different answer helps me understand.
 
So I'm looking for some advice.

My situation is, I'm about to finish an AS in CompSci and then I'm transferring to a different school to complete my bachelor's in CompSci. Thing is, I don't really do any coding or any "computer stuff" in general outside of my classes. I'm great in my classes and finish everything in class days before it's due, and I think I'm pretty good for someone who only codes several hours a week. So far I've taken stuff like Java, VB, C/C++, Discrete Math, Linear Algebra, and Systems Analysis/Design.

Basically what I want to ask is, what exactly should I be doing and working on outside of class on my own to improve my skills, learn, and eventually start doing shit I can slap on a resume in the long run. I mean, should I just read some books to start? Just do random coding projects as much as possible? Should I look into learning a few languages or just focus on one? I really want to supplement what I'm learning and work on stuff I won't learn in class, rather than learn stuff outside of school and retread it next semester.

Not necessarily involving strictly coding but anything related to software engineering.

Thanks.
 

w3bba

Member
So I'm looking for some advice.

My situation is, I'm about to finish an AS in CompSci and then I'm transferring to a different school to complete my bachelor's in CompSci. Thing is, I don't really do any coding or any "computer stuff" in general outside of my classes. I'm great in my classes and finish everything in class days before it's due, and I think I'm pretty good for someone who only codes several hours a week. So far I've taken stuff like Java, VB, C/C++, Discrete Math, Linear Algebra, and Systems Analysis/Design.

Basically what I want to ask is, what exactly should I be doing and working on outside of class on my own to improve my skills, learn, and eventually start doing shit I can slap on a resume in the long run. I mean, should I just read some books to start? Just do random coding projects as much as possible? Should I look into learning a few languages or just focus on one? I really want to supplement what I'm learning and work on stuff I won't learn in class, rather than learn stuff outside of school and retread it next semester.

Not necessarily involving strictly coding but anything related to software engineering.

Thanks.

I'm in a similar situation and honestly it's no big problem. Keep yourself up to date with magazines and articles online. Get a few books as references for different topics and get an idea of what to find where in them.

Computer science is a giant field. On the job you will only need a specific subset of all your skills. Refresh or learn those whenever the job requires it. For example, I just switched jobs and have to get into web development again, and I'm learning Go right now.
 

JeTmAn81

Member
So I'm checking out the lectures from that algorithms class on coursera even though it ended. And damn, nothing is making sense. I'm also looking through stuff like Cracking the Coding Interview and all those algorithm challenge sites, and they still confuse the hell out of me.

Like I have no idea how to wrap my head around creating and coming up with any algorithms. I could stare at a hackerrank, leetcode, coderbyte etc. question forever and still have no clue what to type in. Yet if I saw a solution I could probably break it down and figure out what it does, but I'd have no clue how to come up with the answer myself or why one solution is better than another.

Basically this is where I feel being self taught is a weakness. I haven't found anything that could help me understand theory and algorithms on my own. I'm attempting to study since I have a phone interview next week with a company I really wanna work for, but I'm just drawing blanks every single time I study or try to answer an algorithm question :(.

Starting with the most elementary example you can think of, what is it that you're finding confusing? I'm assuming you checked out the Princeton Algorithms course with Robert Sedgewick.

Concerning algorithm creation, in the vast majority of jobs you're not required to do anything more than apply your toolset of established algorithms to a new problem. Basically almost nobody really invents anything new in terms of algorithms unless they're high-level researchers or working in important positions at very large companies.
 
Hey long time developers! I'm about to move from internship to full-time developer at the company I work at. Yesterday my boss talked to me about how when customers start paying for the projects I work on I will basically have time to "fix problems," not "fix problems as they should be fixed."

Do y'all have any tips about working faster but keeping the quality bar up? Is it merely practice? Time?
 

Nelo Ice

Banned
Starting with the most elementary example you can think of, what is it that you're finding confusing? I'm assuming you checked out the Princeton Algorithms course with Robert Sedgewick.

Concerning algorithm creation, in the vast majority of jobs you're not required to do anything more than apply your toolset of established algorithms to a new problem. Basically almost nobody really invents anything new in terms of algorithms unless they're high-level researchers or working in important positions at very large companies.

Something like this for example.
https://coderbyte.com/information/Letter Changes

I'll stare at it like, yep, I don't even know what to start typing in ><. Then I look at answers like this and go, yep, I did not think to even start with any of that.
https://github.com/ratracegrad/coderbyte-Beginner/blob/master/Letter Changes

It's like I feel like I'm competent when learning the concepts and building a project out. Things start to click when I'm learning how to build projects and I end up learning concepts I wasn't even expecting.

Like I recently learned some Angular, and even my friend who was the teacher of the course said I was picking up on things really quickly. But he also mentioned there were some fundamental JS concepts that were difficult for me to understand because I didn't have a full grasp of the language itself. Like he had to explain to me the difference between a factory and a constructor function. And right now I'm learning React, and the course I'm taking makes sense for the most part; I'm already thinking of how I could rebuild the course app in Angular. I can see the parallels and how things would be different.

Yet when I'm asked to answer algorithms and I'm expected to come up with an answer I just blank out and have no idea where to begin.
 

JesseZao

Member
Excel can diaf.

Part of my job is maintaining excel vba dashboards. I saw some strange data on one and went to investigate. Apparently, 180 - 3*60 != 0, but 1E-08...

I never thought I'd need an epsilon in excel comparisons. >:|
 

JeTmAn81

Member
Something like this for example.
https://coderbyte.com/information/Letter Changes

I'll stare at it like, yep, I don't even know what to start typing in ><. Then I look at answers like this and go, yep, I did not think to even start with any of that.
https://github.com/ratracegrad/coderbyte-Beginner/blob/master/Letter Changes

It's like I feel like I'm competent when learning the concepts and building a project out. Things start to click when I'm learning how to build projects and I end up learning concepts I wasn't even expecting.

Like I recently learned some Angular, and even my friend who was the teacher of the course said I was picking up on things really quickly. But he also mentioned there were some fundamental JS concepts that were difficult for me to understand because I didn't have a full grasp of the language itself. Like he had to explain to me the difference between a factory and a constructor function. And right now I'm learning React, and the course I'm taking makes sense for the most part; I'm already thinking of how I could rebuild the course app in Angular. I can see the parallels and how things would be different.

Yet when I'm asked to answer algorithms and I'm expected to come up with an answer I just blank out and have no idea where to begin.

The example you posted doesn't have a lot to do with understanding algorithms. True, there is a basic algorithm they ask you to follow but I would guess your chief problem there is more related to how to implement it. For instance, do you feel you could come up with this pseudo code to describe the problem?

Code:
1.  Step through each character in the given string.  For each character:
         A.  Is this character a letter?
                    i.  If yes, find the next letter in the alphabet.  Is that new letter a vowel?
                             a.  If yes, replace the current character in the string with the capitalized version of the new letter.
                             b.  If no, replace the current character in the string with the lowercase version of the new letter.
                    ii.  If no, leave it alone and keep going through the string.

That is the algorithm. It is independent of implementation details. There are various ways of carrying out each step in any language you pick. As you can see, it's just a series of steps to follow. Most people would be able to follow this algorithm by hand and carry it out on paper without issue. The implementation is about making the computer understand how to do it.

Anyway, that kind of thing isn't really what's typically discussed when learning about "algorithms". Usually you're learning about different types of sorting and searching through strings and other data types, learning algorithms which apply very broadly as opposed to the one in that small example which is very specific to the problem.
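For what it's worth, here's one way that pseudocode could come out in C++ (my own sketch, not the linked solution; letterChanges is just a name I picked):

Code:
#include <cctype>
#include <string>

// Coderbyte "Letter Changes": replace every letter with the next letter in
// the alphabet (z wraps to a), then capitalize the new letter if it's a vowel.
std::string letterChanges(std::string str) {
    const std::string vowels = "aeiou";
    for (char& c : str) {
        unsigned char uc = static_cast<unsigned char>(c);
        if (!std::isalpha(uc))
            continue;                        // not a letter: leave it alone
        char lower = static_cast<char>(std::tolower(uc));
        char next  = (lower == 'z') ? 'a' : static_cast<char>(lower + 1);
        if (vowels.find(next) != std::string::npos)
            c = static_cast<char>(std::toupper(static_cast<unsigned char>(next)));
        else
            c = next;                        // consonants stay lowercase
    }
    return str;
}

Each line maps pretty directly onto one of the numbered steps above; the hard part is usually just knowing the string and character tools your language gives you.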
 
Excel can diaf.

Part of my job is maintaining excel vba dashboards. I saw some strange data on one and went to investigate. Apparently, 180 - 3*60 != 0, but 1E-08...

I never thought I'd need an epsilon in excel comparisons. >:|

Fucking love Excel. Sometimes I want to leave my amazing job working on compiler tools so I can go get a job making outrageously complex Excel spreadsheets.
 

JesseZao

Member
Fucking love Excel. Sometimes I want to leave my amazing job working on compiler tools so I can go get a job making outrageously complex Excel spreadsheets.

You're a strange one.

Another part of my job is replacing the dashboards with .net apps. I feel no remorse when a spreadsheet dies.

As an aside, I was very surprised how much my company relies on excel when I started. They crunch so much data and it pains me how little they utilize databases. They love their macros and pivot tables. Everything I make needs to allow csv downloads.
 

Nelo Ice

Banned
The example you posted doesn't have a lot to do with understanding algorithms. True, there is a basic algorithm they ask you to follow but I would guess your chief problem there is more related to how to implement it. For instance, do you feel you could come up with this pseudo code to describe the problem?

Code:
1.  Step through each character in the given string.  For each character:
         A.  Is this character a letter?
                    i.  If yes, find the next letter in the alphabet.  Is that new letter a vowel?
                             a.  If yes, replace the current character in the string with the capitalized version of the new letter.
                             b.  If no, replace the current character in the string with the lowercase version of the new letter.
                    ii.  If no, leave it alone and keep going through the string.

That is the algorithm. It is independent of implementation details. There are various ways of carrying out each step in any language you pick. As you can see, it's just a series of steps to follow. Most people would be able to follow this algorithm by hand and carry it out on paper without issue. The implementation is about making the computer understand how to do it.

Anyway, that kind of thing isn't really what's typically discussed when learning about "algorithms". Usually you're learning about different types of sorting and searching through strings and other data types, learning algorithms which apply very broadly as opposed to the one in that small example which is very specific to the problem.

Ahh, I see. Yeah, I think I might be able to come up with the steps to solving the problem, but yeah, implementing it is giving me fits. Thanks for responding btw; attempting to study and solve any of those problems always makes me feel so stupid.
 

Koren

Member
Part of my job is maintaining excel vba dashboards. I saw some strange data on one and went to investigate. Apparently, 180 - 3*60 != 0, but 1E-08...

I never thought I'd need an epsilon in excel comparisons. >:|
Excel needs as much epsilon management as any programming language (and that's fine, I'd say), and that won't change till we get decimal float support (soon, but I dread the way it'll be handled in software like spreadsheets).

But I find your example really, really strange (and I can't reproduce it).

As far as I know, the differences between Excel and vanilla IEEE 754 are really minor (mostly, no support for denormals and infinities). Integers stored in doubles can't give you rounding errors unless they're over 2**52.

Are you sure it's really 180, 3 and 60, and not different float values rounded to those for display?

I mean, it can be perfectly normal and happen in *any* language:
Code:
let x = 180.0 and y = 10.0 *. (0.2 +. 0.1);;
x : float = 180.0
y : float = 3.0

x -. y *. 60.0
- : float = -2.84217094304e-014
(and this is the correct result for any language using IEEE 754 double precision with normal rounding scheme)

I'm sure you can produce a 1e-8, especially with Excel, which rounds on display depending on the space available.

Also, if I remember correctly, there's a "Precision as displayed" option that uses the displayed values for the computations (it used to be on the calculation tab in the options; I'm not sure about recent versions). That may produce greater rounding errors and give results that depend on column widths, but it may also solve your "problem" (even if I don't think that's a correct solution).
 
You're a strange one.

Another part of my job is replacing the dashboards with .net apps. I feel no remorse when a spreadsheet dies.

As an aside, I was very surprised how much my company relies on excel when I started. They crunch so much data and it pains me how little they utilize databases. They love their macros and pivot tables. Everything I make needs to allow csv downloads.

Macros are shit. People just use them because (most of the time) they don't know how to do what they want with pure Excel. But it's usually possible.

A well-designed spreadsheet is a work of art.
 

Koren

Member
Macros are shit. People just use them because (most of the time) they don't know how to do what they want with pure Excel. But it's usually possible.

A well-designed spreadsheet is a work of art.
I agree, I love what you can do. But spreadsheets can also be a nightmare to maintain and understand if you're not the one who created them...
 

MrOogieBoogie

BioShock Infinite is like playing some homeless guy's vivid imagination
I recently enrolled in Harvard's CS50 online course.

I've watched four weeks' worth of lectures (Weeks 0, 1, 2, 3) and completed three problem sets (PSETs 0, 1, 2).

I have never taken a programming class before this one. Never delved into any computer science.

Some thoughts: The course can be INCREDIBLY challenging and demanding, especially for a newbie. Solving the Mario pyramid problem was such a foreign concept to me at first. Most recently, the cryptography problems that force you to do ASCII math just completely went over my head. However, with some guidance from online sources, I managed to get on the right track. I'm still spending upwards of 20 hours on these assignments, however.

The number of new concepts this course forces you to learn in such a relatively short time is way more than I was anticipating. At first, I wasn't comfortable with even the printf() function. Now I've delved into arrays, command-line arguments, nested loops, etc. When I'm trying to solve a problem set, like certain conditions in the cryptography homework, I notice that I get so overwhelmed by all the different ways you can approach a question in programming that I forget to just take the problem slowly, one step at a time.

It's been an incredible learning experience so far, and I feel vastly more knowledgeable about the field. Apparently the difficulty only ramps up, so given how much I've struggled after just a few weeks, I'm both excited about and dreading future lessons.

Anyone else ever take this course and want to share his/her experiences?
 
aw man. Editing your tag is shameful, cpp.

My tag is the same, right? If you mean the post it links to, I had to edit it because it was derailing too many threads. I could be like "hey guys, 2+2=4" and get responses like "guys, click his tag, not taking this piece of shit seriously" (hyperbole, but you get the idea).
 
Man, it's very hard to find good online resources about certain aspects of GIS. I get about 80 million positional updates per day per data source (2-3 sources) that I need to persist to the database in as near a real-time manner as possible. The problem I keep running into is that if I put an index on the table, the upserts/merges take too long, but without the indexes, bounding box queries take too long to be performant on the UX side. I've been trying to investigate whether there are any good in-memory geospatial caching mechanisms that allow for BBox querying, but I can't find much. Anyone know how elasticsearch and solr perform? At this point I am thinking of switching another aspect of our application to solr anyway, and I am curious if just building and rebuilding geospatial indexes on an interval in solr would be performant.

I wish my boss would just let me throw more hardware at problems sometimes, but high-end SQL Server instances are too expensive.
 
Man, it's very hard to find good online resources about certain aspects of GIS. I get about 80 million positional updates per day per data source (2-3 sources) that I need to persist to the database in as near a real-time manner as possible. The problem I keep running into is that if I put an index on the table, the upserts/merges take too long, but without the indexes, bounding box queries take too long to be performant on the UX side. I've been trying to investigate whether there are any good in-memory geospatial caching mechanisms that allow for BBox querying, but I can't find much. Anyone know how elasticsearch and solr perform? At this point I am thinking of switching another aspect of our application to solr anyway, and I am curious if just building and rebuilding geospatial indexes on an interval in solr would be performant.

I wish my boss would just let me throw more hardware at problems sometimes, but high-end SQL Server instances are too expensive.

Try adding another column called "Region" or something. Imagine you break the world into a set of tiles, each tile is a region. When you do an update you store the region. You don't index the x and y coordinates (or latitude and longitude), just the region. Bounding box queries then become region based queries. You only have to maintain 1 index now instead of many, and the index has fewer possible values, so it should be faster to update.

Unless I misunderstood your problem.
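A rough sketch of how that region column could be computed (the tile size and the function name are assumptions, this is just the idea):

Code:
#include <cmath>
#include <cstdint>

// Map a latitude/longitude onto a coarse grid cell. With 1-degree tiles there
// are only 360 * 180 = 64,800 distinct regions, so the index stays small.
std::int32_t regionId(double latitude, double longitude) {
    const double tileSizeDegrees = 1.0;   // assumed tile size; tune to your data
    const int columns = static_cast<int>(360.0 / tileSizeDegrees);
    int row = static_cast<int>(std::floor((latitude  +  90.0) / tileSizeDegrees));
    int col = static_cast<int>(std::floor((longitude + 180.0) / tileSizeDegrees));
    return static_cast<std::int32_t>(row * columns + col);  // pack row/col into one id
}

A bounding box query then becomes a WHERE Region IN (...) over the handful of tile ids that cover the box, instead of a range scan over raw coordinates.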
 
Try adding another column called "Region" or something. Imagine you break the world into a set of tiles, each tile is a region. When you do an update you store the region. You don't index the x and y coordinates (or latitude and longitude), just the region. Bounding box queries then become region based queries. You only have to maintain 1 index now instead of many, and the index has fewer possible values, so it should be faster to update.

Unless I misunderstood your problem.

I've thought about a region based approach, but there's not really a huge difference between that and grid tessellation and grid levels in SQL Server spatial indexes, and that's automated for you. The issue I have is that with broad indexes queries still take long, but with focused indexes updates take too long. I'm just going to tweak the indexing some more and see if I can get some satisfactory results. I just worry about how it will scale. 10-50 users is one thing, but 1000+ users will be an entirely different story. Especially considering how expensive it gets when you need read replicas with SQL Server. Right now we're running the web edition, since we're not at the scale where we need replication yet, but boy will it be fun when we are. We need to move to postgres.
 
I've thought about a region based approach, but there's not really a huge difference between that and grid tessellation and grid levels in SQL Server spatial indexes, and that's automated for you. The issue I have is that with broad indexes queries still take long, but with focused indexes updates take too long.

Can you batch your updates and use array-based updates? Making fewer round trips to the db can give huge performance gains. I haven't done this with SQL Server, but I did some Oracle work with OCCI (Oracle C++ Interface), and what you would do is create a query like this:

{CALL UpsertPosition(id, x, y, z);}

And then you would make a prepared statement out of it, and bind an arbitrarily large number of sets of parameters to the same statement. Then when you execute it, it does all of them at once. This can be a huge win because internally it can acquire locks on the table and indices only once instead of once for each query.
 
Can you batch your updates and use array-based updates? Making fewer round trips to the db can give huge performance gains. I haven't done this with SQL Server, but I did some Oracle work with OCCI (Oracle C++ Interface), and what you would do is create a query like this:

{CALL UpsertPosition(id, x, y, z);}

And then you would make a prepared statement out of it, and bind an arbitrarily large number of sets of parameters to the same statement. Then when you execute it, it does all of them at once. This can be a huge win because internally it can acquire locks on the table and indices only once instead of once for each query.

Yeah, it's already batched. We do batches of 5000 currently, which runs in about 700ms or so average on our m4.large Amazon RDS instance. It's about .09-.14ms per upsert. It's actually pretty quick, but that's without the spatial index. I have to benchmark again with the spatial index. It's been a while since I did it.

As a side note, we also keep a history of the positions. We use Cassandra. That same batch takes 20ms in Cassandra. I <3 Cassandra. It's insane how easy position histories are compared to indexing current positions.

To give you an idea of client-side performance: a bbox query around Shanghai, returning about 15,000 positions, takes about 375ms to fulfill. That's running locally, so excluding most network latency. Assuming my Amazon instances are properly regionalized, you can expect 50-100ms more latency. That's not unacceptable, but I worry about scaling from the 10-20 users we've historically dealt with to web scale.
 

Nowise10

Member
I'm in freshman-year computer science for C++. Can someone please help me with this simple process?

In the "game" I'm making, the user needs to input a number as the maximum value, and the program will take that number and generate a random number between 10 and the number entered. The generated number is then stored in "int PileSize;"

Now I need the PileSize integer value to be accessible from any function, but I really don't seem to understand how to make it work. Can anyone help?
 

Mr.Mike

Member
I'm in freshman-year computer science for C++. Can someone please help me with this simple process?

In the "game" I'm making, the user needs to input a number as the maximum value, and the program will take that number and generate a random number between 10 and the number entered. The generated number is then stored in "int PileSize;"

Now I need the PileSize integer value to be accessible from any function, but I really don't seem to understand how to make it work. Can anyone help?

The most sensible way to do so would be to pass it as a parameter to whatever functions need it. Another option is to have the PileSize value be in the global scope so that it can be accessed from anywhere in your program, not just in the scope of a function. You can achieve this by declaring the PileSize variable outside of any function, including your main function. It's considered good practice to avoid using global variables as much as possible, but it sounds like the assignment wants you to use PileSize as a global variable.
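A minimal sketch of both options (the function names and the pile/stone wording are just placeholders):

Code:
#include <cstdlib>
#include <ctime>
#include <iostream>

int PileSize = 0;   // option 2: a global, visible to every function in this file

void printPileUsingGlobal() {
    std::cout << "Pile has " << PileSize << " stones\n";     // reads the global
}

void printPileUsingParameter(int pileSize) {                 // option 1: pass it in
    std::cout << "Pile has " << pileSize << " stones\n";
}

int main() {
    int maxValue = 0;
    std::cout << "Enter the maximum pile size: ";
    std::cin >> maxValue;

    std::srand(static_cast<unsigned>(std::time(nullptr)));   // seed the RNG
    PileSize = 10 + std::rand() % (maxValue - 10 + 1);       // random in [10, maxValue], assumes maxValue >= 10

    printPileUsingGlobal();
    printPileUsingParameter(PileSize);
    return 0;
}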
 
Man, if I don't have a good project to work on, I feel like I regress... programming sucks in this way.

Have you considered contributing to open source projects? Creating something on your own is a great escape hatch if you're starting to get jaded, but sometimes it's hard to figure out what you want to do; and while it's not trivial to find an OSS project you want to contribute to, there are hundreds, or thousands, or hundreds of thousands of projects out there that could use your help.
 
What does the job market look like in Denver, Seattle and Portland? I just started working last year and my current job is very unsatisfying. I'm looking to move to a new city but obviously won't have any connections there. I'm looking to move at the beginning of next year, so I'm trying to spend this year preparing. I'll have 2 years of professional experience at that point. I also have a CompSci BS. Any advice would be greatly appreciated!
 

zeemumu

Member
I'm guessing that I did okay on that final because I ended up with a B- in the class. Thank you to everyone who offered input on Big O Notation.
 
If you ever wonder why people look down on Javascript "programmers", read this:

https://news.ycombinator.com/item?id=11348798

At least it goes on to show that tons of "professional" "programmers" are arrogant (and ignorant) assholes if nothing else.

This was so stupid. Yes, npm modules are nice "lego blocks," but depending on one also means trusting that the module is going to do the right thing, not change out from under you, and that the ancestor modules it uses are doing the right thing too. Hopefully a good lesson for the Node community.

Trust that left-pad had: it did the right thing, it did not change, and the ancestor modules it used did the right thing. If Kik had published its new module as Kik, all the existing users and modules that depended on "kik" at the time of the release would have worked as expected, had Azer not thrown his temper tantrum. Did he have the right to throw it? Most likely yes. Was it incredibly stupid and reckless to do it without any warning to the Node community? Absolutely.

As for the lessons to be learned, NPM addresses them in its blog post.
 
At least it goes on to show that tons of "professional" "programmers" are arrogant (and ignorant) assholes if nothing else.



Trust that left-pad had: it did the right thing, it did not change, and the ancestor modules it used did the right thing. If Kik had published its new module as Kik, all the existing users and modules that depended on "kik" at the time of the release would have worked as expected, had Azer not thrown his temper tantrum. Did he have the right to throw it? Most likely yes. Was it incredibly stupid and reckless to do it without any warning to the Node community? Absolutely.

As for the lessons to be learned, NPM addresses them in its blog post.

Incidentally, those lessons don't seem to include "learn how to be a decent software engineer and use packages appropriately"
 

Makai

Member
Code:
return toString.call(arr) == '[object Array]';

Wow, I had no idea there were popular packages this small. Probably the right approach if you take OO to its natural conclusion.
 