
Programming |OT| C is better than C++! No, C++ is better than C

Looking for some help with FXML.

Basically I want to switch scenes on the same stage. I followed this tutorial, but it's throwing exceptions.

I tried the default empty FXML page in NetBeans and it didn't load. I copy-pasted my other FXML form onto my new one and it works (I can tell because it executed the stage.setTitle() code I put in the if condition). So I deleted a label with no fx:action assigned to it, and the form loads. I can add new buttons as well. But if I delete any nodes that are assigned to the controller, it doesn't want to load the page at all anymore, and throws an exception.

I've already finished the program in plain JavaFX; the second part of the assignment wants it in FXML. No GUI builders yet.

edit: Well fuck, I think I just figured it out in my head. I think whenever I start up the second stage it tries to run initialize() for the new stage as well, and I do operations on some FXML elements that don't exist now. I think I have a solution.

Yeah, that fixed it. Damn, this thing is giving me a headache. Took me a while to figure out that the default value of combo boxes isn't set to the promptText; I kept getting null errors :(.
 

Makai

Member
- A lot of people have a hard time reasoning about some fairly central functional programming elements like recursion and folding. They don't conceptualize the universe that way. People like OO languages because most people conceptualize the universe as a collection of objects interacting with each other in their everyday life.
I am definitely inclined to think in an OO way, but LINQ has prepared me for FP. I will probably switch to FP full-time if I can figure out how to structure a large-scale project in an FP language. I've long detested recursion, but I'm willing to deal with it because the benefits seem monumental. I have worked on too many software projects where I am almost unable to be productive because the original team badly neglected OO principles - extreme coupling, no evident structure, unpredictable side effects. It's easy for me to say, "this project would have been successful if the team followed OO principles," but FP is making me question that. Are OO design guidelines and patterns just "band-aids" for a flawed paradigm? This was meant to be humorous, but I am unable to argue against it:

fpvsoo.jpg
 
I know one major advantage of functional programming is the ability to more easily reason about complexity, at least in the statically typed ML-like languages. But it seems much harder for people to implement all the little details. With OO it's easier to encapsulate some component and then slot it into a system of components. You get to think about the world as objects, not as some mathematical structure that represents a lambda calculation.
There is a very mechanical way to represent OO methods and datatypes in functional programming languages.

The first observation is that any method on an object of the form
Code:
object.method(args, args, args);
can always be translated into a function
Code:
method(object, args, args, args);
and in fact this is what is happening anyway, because the object is always passed as an implicit "this" argument. In fact, in the Rust programming language, methods on objects have to be declared with that very type signature:
Code:
fn method(&self, foo: String, bar: String)
It is far more honest.
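
To make the first observation concrete, here is a small C++ sketch (the Greeter type and names are made up for illustration). The member function and the free function do exactly the same work; the only difference is whether the object is implicit or explicit:
Code:
#include <iostream>
#include <string>

struct Greeter {
    std::string name;

    // Member function: the object is passed implicitly as `this`.
    void greet(const std::string& msg) const {
        std::cout << msg << ", " << name << "\n";
    }
};

// Equivalent free function: the object is simply the first argument.
void greet(const Greeter& self, const std::string& msg) {
    std::cout << msg << ", " << self.name << "\n";
}

int main() {
    Greeter g{"world"};
    g.greet("hello");    // object.method(args)
    greet(g, "hello");   // method(object, args)
}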

The second observation is that any method which would mutate the object in question should instead simply return a new object. So some method
Code:
Date.set_time(seconds)
should be represented as a function
Code:
Date set_time(Date, int)
For those of you who are concerned this is slow:

1) What are you doing that needs to be lightning fast? Linear algebra with thousands of dimensions? Signal processing on an embedded microcontroller? How slow do you think this is? Did you time it? Show me the benchmarks.

2) Are you using a garbage collected language? Then you have already sacrificed a huge amount of performance and real time predictability. If you need it to be fast, C++ or Rust is the maximum level of abstraction you should be writing in; C if you have space constraints; asm if you are counting clock cycles and writing a cyclic executor for your pacemaker.

3) You have absolutely no idea what kinds of transformations your compiler is making to your AST if you are using a high level language. In fact, if you are using LLVM, your compiler is already transforming your mutable code into a low level functional programming language by converting it to Static Single Assignment (SSA) form, which means that all bindings are immutable. This form is MUCH easier to analyze and allows for a huge amount of optimizations. Unnecessary copies are stripped by the analysis and variables are made mutable again as necessary to reuse space or avoid cache misses or whatever. So mutability is not for performance, it is for the semantics of identity tied to changing state.

4) If you wanted mutability for global state, then you should know that that is not thread safe in general, and imo if it is not thread safe, then it should not be allowed in a language. I.e. data races should be a compiler error (see: Rust).

Now, what did you gain by making your function pure? Because it only relies on its arguments, you need only look at its definition to determine if it is correct or not. This local logic might even apply to every function in your code base. And if every single function is pure, then you only need unit testing to test your entire codebase, thus completely invalidating the need for integration testing... because there is nothing to integrate. You merely test every function against its arguments. No set-up / tear-down of fixtures, no dependency injection. The only integration you can do is function composition.
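
A rough sketch of that in C++, with a made-up Date type, just to show both points: set_time returns a new value instead of mutating, and testing it needs nothing but a call and an assert:
Code:
#include <cassert>

struct Date {
    int seconds;   // made-up representation, purely for illustration
};

// Pure function: never touches its argument, returns a new Date instead.
Date set_time(Date d, int seconds) {
    d.seconds = seconds;   // mutates only the local copy
    return d;
}

int main() {
    Date d{0};
    Date d2 = set_time(d, 3600);

    // "Unit testing" a pure function: same inputs, same outputs, no fixtures.
    assert(d.seconds == 0);      // the original is untouched
    assert(d2.seconds == 3600);  // the result carries the change
}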

"But what about side effects?"

Because all of your effectful computations are encapsulated in types, you end up putting all of your effects in the same layer of code. And because it's all in a single layer, you only need to inspect that layer to make sure the effects are what you wanted.

A lot of the functional languages have been half-baked for the past decade, or stuck as academic languages. You have to remember that C's success is directly related to Unix being popular. Java was pushed big time through marketing and sales to corporations. Universities basically pumped out generations of programmers who were initially inclined toward certain flavors. There has yet to be a functional language that really rises up to own a domain.
If SML is half-baked, then Javascript is a raw slab of the poorest slice of meat, and PHP is not even edible.

- Functional languages do not map very directly to microprocessor architectures (outside of a few experimental dataflow processors). It's difficult for a programmer to reason about low level performance, memory, and power issues. You need a concept of state to interact with hardware.
No, I think this is mostly wrong.

If you are talking about purely functional, lazy languages like Haskell (and... I can't think of any other ones), then yes, you are correct. I have spent some time in the Haskell runtime source and I can tell you that there is a lot of stuff the average programmer will use without realizing it is not especially fast, such as dynamic closure creation, thunk hell, etc. Laziness can bite your performance hard (you are constantly tracking in your head, for instance, whether something has been evaluated all the way or only to weak head normal form). But I don't think anybody has ever suggested using Haskell in your smart-wear or pager or whatever, so I don't know why you mentioned power.

But even if it's harder to analyze the performance, it's still possible, and the rules are quite regular. And boy, does GHC perform. The language-shootout benchmarks are not by any means the gold standard in benchmarking performance, but they are a good rough estimate, and compared to every other language: A) GHC far outperforms the interpreted languages like Ruby and Python, as expected, B) GHC is faster than other garbage collected, compiled languages like C#/F# in most cases, and C) GHC is quite consistently within 2-3x the performance of Java and also non-garbage-collected languages like C and C++. So if the performance is hard to analyze, it sure has not stopped people from writing fast programs.

In any case, for other languages, you're totally wrong. A function call is pushing stuff on the stack and jumping to a function, whose next instructions are (probably) to consume bytes off the stack in reverse order. You return things by putting them on the stack and jumping back to the return address. This evaluation model persists across languages, and strict, eager functional languages like F# and Scala behave in this predictable way too. The compositional and data patterns are usually vastly different between imperative languages and functional languages, but to say that the performance is hard to analyze, when the greatest predictors of language performance are data-structure, algorithm, and runtime choice, is a completely disingenuous statement.

- A lot of people have a hard time reasoning about some fairly central functional programming elements like recursion and folding. They don't conceptualize the universe that way. People like OO languages because most people conceptualize the universe as a collection of objects interacting with each other in their everyday life.
Totally agree about recursion. That shit is hard. I would much rather use a loop, any day. Folding, though? Absolutely not: folding encapsulates all of the looping/accumulator patterns. (And if you want to get fancier, you can talk about catamorphisms over F-algebras.)

I have never met anyone who thinks in Abstract Factory, Decorator, or Adaptor patterns "naturally" without having self-flagellated until the ideas were ingrained. The problem is that OO in the large is not structured enough, not mathematical enough, not rigorous enough to solve the most complicated software projects and keep things comprehensible. It is not composable. Programming is the process of defining datatypes and defining operations over those datatypes, and to masquerade it as some imaginary universe of objects sending messages to each other is ill conceived. Object oriented programming was adopted by programmers because it offered 1) ad-hoc polymorphism, 2) encapsulation, and 3) code reuse in ways that were painful or impossible in C, but the history of programming languages has evolved and I think it's obvious now that there are other more principled ways to achieve those things.
Edsger Dijkstra said:
Object-oriented programming is an exceptionally bad idea which could only have originated in California
- Most of the modern languages are hybridized anyway. You can have persistent state in Scala, anonymous and first-class functions in C++, etc. So I think it comes down to whether a functional language really offers that much more than the functional programming features of imperative languages, considering the support that imperative languages have in terms of existing toolchain, libraries, knowledge base, and so forth.
Bondage and discipline. When your language forces you to abandon exceptions, mutability, global state, inheritance, unbridled effects, etc., you learn more principled, composable ways of doing things.

Are OO design guidelines and patterns just "band-aids" for a flawed paradigm?
Yes :) now hurry up and watch the talk where that slide comes from
 
need some C++ help

#include <iostream>
using namespace std;

int main()
{
int currentage, ageinfiveyears;

cout << "Enter your current age:";
cin >> currentage;

cout << "In five years you will be:" <<ageinfiveyears;

ageinfiveyears = currentage+5;

system("pause");
return 0;
}


whatever number i input i get out something crazy like 26867..
 

Haly

One day I realized that sadness is just another word for not enough coffee.
Code:
ageinfiveyears = currentage+5;

cout << "In five years you will be:" <<ageinfiveyears;
You need to initialize a variable (give it a value) before you use it. When you don't, the computer will read whatever junk happens to be in that memory location (the space reserved when you declared the variable) and you'll get something completely off.

How are you compiling C++? Depending on your method you might want to turn on the strict settings so stuff like this gets caught.
 
You're outputting ageinfiveyears before setting its value, so you're probably getting whatever junk data is at that memory location.

oh shit i forgot you had to assign the ints lol

Code:
ageinfiveyears = currentage+5;

cout << "In five years you will be:" <<ageinfiveyears;
You need to initialize a variable (give it a value) before you use it. When you don't, the computer will read whatever junk happens to be in that memory location (the space reserved when you declared the variable) and you'll get something completely off.

How are you compiling C++? Depending on your method you might want to turn on the strict settings so stuff like this gets caught.

im using dev++

ehh, I'm still confused actually... because I mean for the current age to be assigned by the user...
 
this one works just fine and as far as i can tell its the same kind of process as the above..

#include <iostream>
using namespace std;

int main()

{
double C, F;

cout << "Enter a Celsius value" << endl;
cin >> C;

F = (C*1.8)+32;
cout << "Fahrenheit temperature is:" << F;

system("PAUSE");
return 0;
}
 

Haly

One day I realized that sadness is just another word for not enough coffee.
It outputs F after it makes the calculation and assigns the value to the variable.

You're trying to output ageinfiveyears before giving it a value.
 

Trident

Loaded With Aspartame
oh shit i forgot you had to assign the ints lol



im using dev++

ehh, I'm still confused actually... because I mean for the current age to be assigned by the user...


What you have:

cout << "In five years you will be:" <<ageinfiveyears;

ageinfiveyears = currentage+5;

What you need:

ageinfiveyears = currentage+5;

cout << "In five years you will be:" <<ageinfiveyears;

In your current version, you're outputting ageinfiveyears before defining it as currentage+5.
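
Putting it together, the fixed program is roughly just your original with those two lines swapped (plus the <cstdlib> include that system() formally needs):
Code:
#include <iostream>
#include <cstdlib>   // for system()
using namespace std;

int main()
{
    int currentage, ageinfiveyears;

    cout << "Enter your current age:";
    cin >> currentage;

    // Compute first, then print.
    ageinfiveyears = currentage + 5;
    cout << "In five years you will be:" << ageinfiveyears;

    system("pause");   // Windows-only, as in the original
    return 0;
}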
 
It outputs F after it makes the calculation and assigns the value to the variable.

You're trying to output ageinfiveyears before giving it a value.

i didn't realize it had to be in line by line order lol wow

well, it works fine now

i got frustrated over nothing lol

thanks guys :p
 

tokkun

Member
1) What are you doing that needs to be lightning fast? Linear algebra with thousands of dimensions? Signal processing on an embedded microcontroller? How slow do you think this is? Did you time it? Show me the benchmarks.

I've worked on a lot of different performance-intensive projects - triggering systems for the LHC, electronic design automation, distributed filesystems, userspace networking, and cloud services.

My take on functional languages is not that they perform poorly, but rather that they are designed to perform well specifically in throughput-oriented applications. They are a good choice for some performance-intensive applications like offline analytics. I don't think they are good for latency or memory-centric applications. I have worked on real-time systems with deadlines measured in microseconds. I'm currently working on a multi-threaded server with a memory budget of 4 MB. It's true that a lot of procedural languages can't cope with those constraints either, but functional languages generally have a more difficult time of it by making it harder to reason about when things are evaluated and whether data is being mutated or copied.

The compositional and data patterns are usually vastly different between imperative languages and functional languages but to say that the performance is hard to analyze when the greatest predictors of language performance are data-structure, algorithm, and runtime choice is a completely disingenuous statement.

Hence why I deliberately said "low level performance" in the text you quoted.
 
I've worked on a lot of different performance-intensive projects - triggering systems for the LHC, electronic design automation, distributed filesystems, userspace networking, and cloud services.

My take on functional languages is not that they perform poorly, but rather that they are designed to perform well specifically in throughput-oriented applications. They are a good choice for some performance-intensive applications like offline analytics. I don't think they are good for latency or memory-centric applications. I have worked on real-time systems with deadlines measured in microseconds. I'm currently working on a multi-threaded server with a memory budget of 4 MB. It's true that a lot of procedural languages can't cope with those constraints either, but functional languages generally have a more difficult time of it by making it harder to reason about when things are evaluated and whether data is being mutated or copied.



Hence why I deliberately said "low level performance" in the text you quoted.
Sorry, misinterpreted your post slightly. Read it as "functional languages are not great when you need predictable, optimum performance" and I thought "uhhh, yeah, but same with every language". The use cases you listed are niche compared to the majority of use cases of programming. Real time constraints are not ubiquitous. If you are programming something with real time constraints, then you have already limited yourself to 2 or 3 languages, anyway.

Yeah, functional languages don't do well when memory is an issue. GHC generates a fair bit more garbage than other runtimes, I've noticed.
 

Water

Member
oh shit i forgot you had to assign the ints lol
You should have compiler warnings turned on at all times. I'm pretty sure that any of the modern compilers should be able to warn you about the unassigned variable in a simple case such as this. Everybody forgets and makes mistakes, but professionals habitually take proper advantage of their tools and code defensively, so most mistakes are caught automatically and only take a moment to solve instead of becoming actual problems.
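
For example, assuming Dev-C++ is sitting on top of a MinGW/GCC toolchain, turning warnings on from the command line looks something like this (the file name is just a stand-in); with optimization enabled, GCC will typically flag the uninitialized read in the snippet above:
Code:
g++ -Wall -Wextra -O2 age.cpp -o age.exe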
 
Maybe someone can help me with this simple mistake. I want to have text after the variable, but I keep getting an error saying that I need a ; before endl

cout << "You want " << shares << "shares of stock" endl;
 

RustyO

Member
Have you considered porting your application to QT?

I wouldn't bother with C++, it's not a good cross platform solution and it buys you almost nothing. I'd either use some existing way to run or convert the app (Xamarin), or rebuild it cross platform (web).

If your clients want to run it on OS X, just bite the bullet and learn Objective-C. Swift only extends the Objective-C runtime, so many of the same features/problems still exist on Le Swift. And using the native OS X UI bits, you'll run into less development friction that way.

You could look into bridging the Obj-C UI with C# or a similar shared data layer. But for most things, I think a re-write will save you a lot of trouble. Or as someone else mentioned, this might be a good time to look into modern JavaScript... or dare I say, Chrome Apps (which are really a form of browser extensions).

Thank you for the recommendations / advice, much appreciated.

I hadn't looked into Qt / Xamarin at all before. The program is very GUI heavy, all custom controls, lots of LINQ, so I dread having to deal with porting those over; and I didn't get a favorable impression based on what I saw / read about the UI tools available et al. Never really dealt with JavaScript, and hate web stuff with a passion. (It's me)

Having had a bit of a think / play around with it all, and having in the meantime come up with a better solution for my initial issues (though not native OS X) that still requires a decent bit of work, I think I'm going to stick with a C# / Wine scenario.

And I'd rather focus energy on C++ / JUCE than JavaScript or anything else, as that is where my core future training/development/interest lies: synths / DSP / audio programming et al.

Thank you all again for the advice, much appreciated, and has given me some new stuff to play around with / investigate. Just hard to justify the change based on user base/count, work involved etc. Maybe something I will revisit in the future though.
 
I want to start some embedded coding on a Raspberry Pi for fun, but would it be better to keep Raspbian or should I install Ubuntu on it instead?
 

Mexen

Member
And I'm an idiot. I should have come here first but here's my thread on advice I need for my project. Please be gentle, senpais.

notice me

Here's the OP anyway

I will try to keep this as short as possible. For the next few months, I'll be working on my project for the CS department/faculty at the university I go to before graduating (think of it as a very, very watered down learning analytics system). I have a few ideas on how to go about it, but I need the wisdom of more experienced coders before I proceed.

Here are the project requirements:
- The system should allow a lecturer to set quizzes, review students' submissions, and make predictions and recommendations based on their performance, which can be viewed as reports or summaries.
- Students should be able to take set quizzes and submit their details and answers. Students should also be able to send queries or concerns to the lecturer. There should be a provision for being able to access/review old quizzes.
- The system should be mobile friendly, i.e., accessible via smartphones in a neat but functional format.

What I would like is advice on the following:
- The tools I'll need (IDEs etc.)
- The languages best suited for such a project (I know Java, C++, Pascal, HTML, XML, CSS and SQL, but I'm always willing to learn what I have to)
- General information such as "you're gonna need a CMS, buddy, because achieving X requires it" or "hm, part Y will require queries" is also welcome, as it will shed more light.

Help me get my degree, GAF!
 
What is the best way of learning Big O notation? We're learning it in our Data Structures course, but all of it seems very perplexing, which is due to the pacing of the course.
 
What is the best way of learning Big O notation? We're learning it in our Data Structures course, but all of it seems very perplexing, which is due to the pacing of the course.

Big O notation itself is very simple. It's a way to express the time complexity of a function (it can also be used for memory complexity, but we usually use it for time complexity). So, what does that mean? Essentially, it's a way to determine how long an algorithm is going to take to execute. Let's say you've written a sorting function that sorts an array in ascending order. You execute it on an array that's 100 elements long and it finishes in 2 milliseconds. "Nice!", you think. "It works! And it's fast too! 2 ms! That's nothing!"

Then you use it on an array with 100000 elements in it. And then you wait. And you wait. And you wait. It's still not finished. The question is: is it worth waiting for? Maybe it's just about to finish. Maybe it's going to take an hour. Or a day. Or a year. How do we know?

We know by looking at the time complexity!

You're using Bubble Sort to sort the array. Bubble sort has a best case complexity of O(n) and a worst case complexity of O(n²), where n is the amount of elements to sort.

If we assume the array we sorted is the worst possible one, we have O(n²). This means that the time the algorithm should take to execute is: t = n²x where x is some value depending on how fast your computer can execute code. So, we know 100 elements takes 2ms. That means x = t / n² = 2 / 100² ms = 0.0002 ms.

So, how long will 100000 elements take? t = n²x = 100000² * 0.0002 = 33.333 minutes. Worst case scenario, it'll take us half an hour.

We don't have time to wait that long, so we rewrite our algorithm to be a better one. This time it has a worst case complexity of O(n log n). How long will it take us now?

100000 * log 100000 * 0.0002 = 332.2 ms = 0.3 seconds. That's a bit better. And just by using a better algorithm. We would have to get a computer that was thousands of times as fast for the old algorithm to match that. And if we wanted to sort even more elements we'd need an even faster computer.

So algorithmic complexity is very important. A better computer can't compensate for a poor algorithm.

Big O notation is very handy for calculating how long something is going to take.

But that's if we already have the Big O notation of an algorithm given to us. How do we go about calculating it for an algorithm we've written? Well, that's when it starts to get a bit more difficult.

Let's have a look at a simple algorithm. This algorithm goes through all the numbers in an array. It then checks if any other number in the array matches that number and if so: prints out that we've got a match.
Code:
for (int i=0; i<n; i++) {
    for (int j=0; j<n; j++) {
        if (i != j && array[i] == array[j]) {
            cout << "Element " << i << " has a match!" << endl;
            break;
        }
    }
}

So, let's try to determine the time complexity for this algorithm. There are two things we are interested in: the best case scenario and the worst case scenario.

Let's start with the best case scenario. What's the best case scenario? Well, the best case scenario is the input that the algorithm will be able to handle the fastest. So, what's that?

Well, we've got two loops, one inside the other. The outer loop will always go from 0 to n-1, regardless of what the contents of the array are. For each such loop, it's going to go through the inner loop. Now, what's the best case scenario for the inner loop?

Well, the inner loop quits as soon as it finds a match. So the best case scenario is if it immediately finds a match every time. When does that happen? It happens when every element of the array is the same: then the inner loop finds a match right away, so there's no need to check the rest of the array. So, in the best case scenario, the inner loop will only check about 1 element.

Now we've got all we need to figure out the complexity of the algorithm. The outer loop has n, and the inner loop has 1. In total we get n*1 = n.

The best case scenario has complexity: O(n).


But what about the worst case scenario? When's that? Well, the outer loop still always goes from 0..n-1, so that's the same. But the inner loop could potentially have to go through all of the elements all the way to n-1 if there aren't any matches whatsoever. If not a single element matches another, the algorithm will have to go through all of them in the inner loop.

So the outer loop has n, the inner loop has n. In total we get n*n = n².

The worst case scenario has complexity: O(n²).

And that's really all there is to it. Of course, in many cases it can be a lot harder than this. Determining what's the best case scenario and what's the worst case scenario isn't always so easy. But the principle is the same.

There's one more thing to think about. In Big O notation, we only care about the "worst offender". If we calculate a complexity that's n² + n/2, all we care about is n², because for very large n, the other terms essentially disappear in comparison with how large n² is. So the Big O notation for n² + n/2 is O(n²).

We also don't care about constants like 5n, that just becomes O(n). The constant is insignificant when n is really large.

The holy grail of complexity is O(1), meaning the algorithm takes constant time. It takes the same amount of time regardless of how many elements we throw at it.
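
As a rough sketch (just to illustrate, not the one true way): the duplicate check from earlier could be rewritten with a hash set. Inserts and lookups in a hash set are O(1) on average, so the whole scan becomes O(n) on average instead of O(n²):
Code:
#include <iostream>
#include <unordered_set>
#include <vector>
using namespace std;

// Prints "Element i has a match!" for every index whose value occurs
// more than once, like the nested-loop version, but in O(n) average time.
void report_matches(const vector<int>& array) {
    unordered_set<int> seen;        // values we've encountered so far
    unordered_set<int> duplicates;  // values that occur at least twice

    for (int x : array) {
        if (!seen.insert(x).second)  // insert() is O(1) on average
            duplicates.insert(x);
    }

    for (size_t i = 0; i < array.size(); i++) {
        if (duplicates.count(array[i]))
            cout << "Element " << i << " has a match!" << endl;
    }
}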


Ok, so with all that knowledge we should, after a lot of practice, be able to get the best and worst case scenario for any algorithm. There's only one problem with that. Our data is very seldom going to be the absolute worst case possible or the absolute best case possible. What we'd really be interested in is some sort of average case. How long it will typically take. Not worst case, not best case, but for most cases. This measurement exists as well, but it is way harder to determine. Your professor won't expect you to be able to determine that.

Usually, we're not so interested in the best case scenario. We usually want to minimize the worst case and the average case.
 

wolfmat

Confirmed Asshole
And I'm an idiot. I should have come here first but here's my thread on advice I need for my project. Please be gentle, senpais.

notice me

Here's the OP anyway
First, you need to design everything.
1. Design what the data structures for users (with their details and access rights), questions, answers and quizzes must look like.
2. Design how the data flows in the data flow scenarios you have:
- Making and editing a quiz
- Presenting a quiz
- Answering a quiz
- Evaluation of a quiz
- Computation of a projection based on evaluated quizzes
- Adding and editing a user
- Login
and so forth.
3. Make up the user interfaces needed.

After you're done with your design, you should know what you need to get things working.

Now prototype the design.

After the prototype is done, apply lessons learned to the design; recode the prototype as necessary if you're feeling lucky, or start over from scratch if the design has changed significantly.

You will probably end up programming a web app.
Things you need then:
- A database (maybe MySQL, maybe MongoDB...)
- A language that is good for prototyping (maybe Python?)
- Something that makes building simple frontends easy (maybe Django?)
- An asynchronous backend that computes projections and feeds results back to the DB, sends mails and shit (maybe CeleryD workers coded in Python?)
- A web server (something like Apache?)
- If you're going for medium scale: A queueing system (maybe RabbitMQ?) and a couple of cheap VMs that have the CeleryD workers running

I think you shouldn't choose prematurely though. Roughly outline a design first. It's always the first thing to do to understand the scope of the project.
 

Mr.Mike

Member
So, how is the average case time complexity determined?

Is it something along the lines of averaging every possible case, or is it done empirically? ( It's probably safe to assume that "cases" won't be evenly distributed, so an experimental approach seems like it might make sense)
 

squidyj

Member
What is the best way of learning Big O notation? We're learning it in our Data Structures course, but all of it seems very perplexing, which is due to the pacing of the course.

Simply put, it's an upper bound on the growth of execution time with respect to some value N. For example, N can be the number of elements in an array, as in a sorting algorithm. Or it might be the depth of a tree. Or it might be the value of an integer. The point is that as N varies, the worst-case execution time of the program varies as some simplified function of N.

It's important to note that in this instance we really only care about the fastest growing term of the problem, which is what O(N) is: it's a set that includes 2N+5, 5N+2, etc. They are all elements of O(N), and so on and so forth for O(N^2) etc. We don't really care about the function itself but the class it belongs to, determined by its fastest growing term. (Of course this means that an O(N) algorithm might not be better than an O(N^2) algorithm for a given dataset, because N remains too small to justify some sort of overhead in the O(N) algorithm.)
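
For reference, the usual formal definition: f(N) is in O(g(N)) if there exist constants c > 0 and N0 such that f(N) <= c*g(N) for all N >= N0. The constant c is what lets you ignore coefficients like the 2 in 2N+5, and N0 is what lets you ignore small inputs.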
 
So, how is the average case time complexity determined?

Is it something along the lines of averaging every possible case, or is it done empirically? (It's probably safe to assume that "cases" won't be evenly distributed, so an experimental approach seems like it might make sense)

I honestly have no idea how I'd go about it. It's beyond my level.
 

squidyj

Member
So, how is the average case time complexity determined?

Is it something along the lines of averaging every possible case, or is it done empirically? (It's probably safe to assume that "cases" won't be evenly distributed, so an experimental approach seems like it might make sense)

I think either you'd want to determine the probability distribution analytically or you'd randomly sample it and integrate
 

Water

Member
Ok, so with all that knowledge we should, after a lot of practice, be able to get the best and worst case scenario for any algorithm. There's only one problem with that. Our data is very seldom going to be the absolute worst case possible or the absolute best case possible. What we'd really be interested in is some sort of average case.
Wrong. We are often only concerned about the worst case, and there are also plenty of algorithms where the worst case can occur disproportionately often in real-world use, e.g. naive quicksort or insertion sort.
 
Wrong. We are often only concerned about the worst case, and there are also plenty of algorithms where the worst case can occur disproportionately often in real-world use, e.g. naive quicksort or insertion sort.

I think I may have expressed myself poorly. I certainly didn't mean to say we aren't interested in worst case or that average case is more interesting. Average case really only tends to be interesting when we have two algorithms with the same worst case to choose from. And best case is almost never useful.
 

wolfmat

Confirmed Asshole
So, how is the average case time complexity determined?

Is it something along the lines of averaging every possible case, or is it done empirically? ( It's probably safe to assume that "cases" won't be evenly distributed, so an experimental approach seems like it might make sense)

The average case time cannot always be determined in a manner that's meaningful in real-world use. But the average case time is also not really of interest for O-considerations.

Anyway. Average case time is interesting most of the time if the processed data is arbitrary.

Usually, it's just "watch what happens". Mostly with real data. Meaning that, yes, you're probably averaging cases to get an idea, but most likely not every possible case -- usually, you can't reasonably predict all possible cases.
[As an aside: If you could compute case time for every possible case, then you can compute the result for every case, and then you can just implement a lookup table for the bad cases, or even a lookup table for ALL cases. (This is actually completely reasonable sometimes.)]

The thing is, you shouldn't really average time of computation because your computer will not run the cases completely isolated. Rather consider counting instructions to predict well. Best case is you count JMPs, CMPs and so on -- hardware instructions --, along with lookup traversals (cache hits / misses ~> RAM ~> VRAM ~> disk ~> net), and optionally average instructions by case count. Sometimes though, all that's interesting is a single instruction count because it's guaranteed to be the bottleneck; often enough, all that's really relevant to predict average computation time is the number of if statements traversed, or the number of XORs evaluated etc.

You should however have an idea about what happens in the average case. You wrote the algorithm after all (or you've already analyzed it). Meaning you're really interested in the frequency of worst cases, their nature (linear / exponential / logarithmic) and their likelihood of occurrence. Hence the O notation.
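
A rough sketch of the "watch what happens" approach in C++ (the sort call is just a stand-in for whatever algorithm you're measuring, and the sizes are arbitrary): time the algorithm over many random inputs and average the wall-clock durations. This measures your machine and your input distribution, not the abstract complexity, which is exactly why it complements rather than replaces the O analysis.
Code:
#include <algorithm>
#include <chrono>
#include <iostream>
#include <random>
#include <vector>
using namespace std;

int main() {
    mt19937 rng(12345);                              // fixed seed for repeatability
    uniform_int_distribution<int> dist(0, 1000000);

    const int trials = 100;
    const int n = 100000;
    double total_ms = 0.0;

    for (int t = 0; t < trials; t++) {
        // Build a fresh random input for every trial.
        vector<int> data(n);
        for (int& x : data) x = dist(rng);

        auto start = chrono::steady_clock::now();
        sort(data.begin(), data.end());              // stand-in for "the algorithm"
        auto end = chrono::steady_clock::now();

        total_ms += chrono::duration<double, milli>(end - start).count();
    }

    cout << "average: " << (total_ms / trials) << " ms" << endl;
}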
 

Big Chungus

Member
Can someone give me ideas for small projects I can develop for use in a portfolio?

I have a few from school but I'd like a few more... but I'm out of ideas.

I have experience using Java, C#, HTML, JavaScript, PHP, SQL, and C.
 
Can someone give me ideas for small projects I can develop for use in a portfolio?

I have a few from school but I'd like a few more... but I'm out of ideas.

I have experience using Java, C#, HTML, JavaScript, PHP, SQL, and C.

Have you considered contributing to some open source projects?

Or you could do stuff like https://projecteuler.net/.

In my honest opinion, building stuff just for the sake of portfolio padding is a waste of time, or at least time that could be better spent elsewhere.
 

Makai

Member
Pretty sure a lot of people find OO easier. I never got to use functional programming in any projects, so my experience is very basic.
What's your opinion on what kind of problems FP solves? I'd love to hear! :)
I'm just starting, but it seems to solve all of the problems with state. I've worked on large scale projects where liberal use of state ended up dooming everything. And that was synchronous code. It will only get worse if CPUs continue to add cores.
 

Kansoku

Member
I'm writing a compiler in C++ for a compiler class, and I'm having a weird problem. I have files A.cpp, A.h, and B.cpp. 'A' is my symbol table, and 'B' is my lexical parser (or whatever that is called in English). If I compile 'A' alone, everything goes well. But when I try to compile them both I get an error:

Code:
A.h:9:9: error: 'unordered_map' in namespace 'std' does not name a type
typedef std::unordered_map<std::string, table_reg>::const_iterator address;

I am compiling with -std=c++11 (tried c++0x as well), but it still doesn't work. I'm using the latest version of MinGW (4.8.1). It's my first time using header files, so I'm not sure if I did something wrong. Here is the pastebin with the relevant parts: http://pastebin.com/qXc8ygW3

I tried to find something on Google, but most answers just tell me to use -std=c++11, which I'm already doing. Ah, and I'm using
Code:
g++ -std=c++11 -c A.cpp B.cpp -o test.exe
to compile. Does anyone have any clue?
 
I'm writing a compiler in C++ for a compiler class, and I'm having a weird problem. I have files A.cpp, A.h, and B.cpp. 'A' is my symbol table, and 'B' is my lexical parser (or whatever that is called in English). If I compile 'A' alone, everything goes well. But when I try to compile them both I get an error:

Code:
A.h:9:9: error: 'unordered_map' in namespace 'std' does not name a type
typedef std::unordered_map<std::string, table_reg>::const_iterator address;

I am compiling with -std=c++11 (tried c++0x as well), but it still doesn't work. I'm using the latest version of MinGW (4.8.1). It's my first time using header files, so I'm not sure if I did something wrong. Here is the pastebin with the relevant parts: http://pastebin.com/qXc8ygW3

I tried to find something on Google, but most answers just tell me to use -std=c++11, which I'm already doing. Ah, and I'm using
Code:
g++ -std=c++11 -c A.cpp B.cpp -o test.exe
to compile. Does anyone have any clue?

You must provide a definition of all non-pointer types before you use them in a declaration, and your typedef counts as a declaration. The compiler must be able to statically deduce the size of every type, and it can't work out how big an "address" is without a definition of unordered_map (and string).

The simplest way to fix this would be to include the unordered_map header in your A.h file (also the string header for std::string as it will need that too).

You absolutely will run into this problem many times in the future without a good understanding of the C/C++ compilation model, so I would suggest you get to grips with this before you write much more code. It is very easy to run afoul of the declaration and definition rules if you aren't aware of them.
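
For the concrete case above, a rough sketch of what the top of A.h might look like after the fix (the table_reg definition is whatever you already have in your pastebin; I'm only showing the includes and the guard):
Code:
// A.h -- sketch: pull in the headers the typedef depends on,
// and guard against double inclusion.
#ifndef A_H
#define A_H

#include <string>
#include <unordered_map>

// ... your existing table_reg definition goes here ...

typedef std::unordered_map<std::string, table_reg>::const_iterator address;

#endif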
 

gaugebozo

Member
Hi Programming GAF! Can you help me find an IDE to use for C++ programming on Linux? I need it for scientific calculations (Monte Carlo integration) that amount to, "Run this piece of code a few million times. Now do it again." They are not very complex as far as design goes, but I'd like some ability to debug and to get some experience for post-degree (read: real world numerical computing) reasons. I used a little Visual Studio when I was working on Windows, but something like that feels like overkill. Bonus points if it can also handle Python.

I've worked on a lot of different performance-intensive projects - triggering systems for the LHC, ...

!

I'm a theorist. What triggers did you work on?
 

Kansoku

Member
You must provide a definition of all non-pointer types before you use them in a declaration, and your typedef counts as a declaration. The compiler must be able to statically deduce the size of every type, and it can't work out how big an "address" is without a definition of unordered_map (and string).

The simplest way to fix this would be to include the unordered_map header in your A.h file (also the string header for std::string as it will need that too).

You absolutely will run into this problem many times in the future without a good understanding of the C/C++ compilation model, so I would suggest you get to grips with this before you write much more code. It is very easy to run afoul of the declaration and definition rules if you aren't aware of them.

Hmm, I see. That worked.

Can you recommend a place to read about it?
 
Hmm, I see. That worked.

Can you recommend a place to read about it?

While I normally would not recommend that someone new to C++ read the language standard, I cannot find any more succinct explanation of this topic than in the standard. Also, you are writing a compiler, so you may gain some more insight into how languages are designed and implemented from reading the standard. Here is a link to the latest copy of the standard. Sections 3.1 and 3.2 are the most pertinent to the problem you encountered.

The whole document is written in standardese and can be quite difficult to read, but most C++ compilers use the terminology of the standard when reporting errors and warnings, so having an understanding of what the terms mean is very useful in general when dealing with C++.
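
As a rough illustration of the declaration/definition distinction those sections formalize (this compiles as a standalone .cpp with g++ -std=c++11 -c):
Code:
// Declarations vs. definitions, in one translation unit.

extern int x;          // declaration: promises x is defined somewhere
int x = 42;            // definition: storage for x actually lives here

struct S;              // declaration: S names a type, size still unknown
S* p1 = nullptr;       // fine: a pointer doesn't need S's size
// S s1;               // error here: can't create an S before its definition

struct S { int a; };   // definition: now sizeof(S) is known
S s2{7};               // fine: the full definition is visible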
 

Kansoku

Member
While I normally would not recommend that someone new to C++ read the language standard, I cannot find any more succinct explanation of this topic than in the standard. Also, you are writing a compiler, so you may gain some more insight into how languages are designed and implemented from reading the standard. Here is a link to the latest copy of the standard. Sections 3.1 and 3.2 are the most pertinent to the problem you encountered.

The whole document is written in standardese and can be quite difficult to read, but most C++ compilers use the terminology of the standard when reporting errors and warnings, so having an understanding of what the terms mean is very useful in general when dealing with C++.

Thanks. It's a bit overwhelming, but I'll work my way through it slowly :)
 

vypek

Member
Hi Programming GAF! Can you help me find an IDE to use for C++ programming on Linux? I need it for scientific calculations (Monte Carlo integration) that amount to, "Run this piece of code a few million times. Now do it again." They are not very complex as far as design goes, but I'd like some ability to debug and to get some experience for post-degree (read: real world numerical computing) reasons. I used a little Visual Studio when I was working on Windows, but something like that feels like overkill. Bonus points if it can also handle Python.

Hm, maybe you can consider Code::Blocks?
 

JesseZao

Member
I know it's naive to think it possible, but I really don't want to see ASP.NET Web Forms ever again. I don't understand why people don't use MVC. It's just so much more elegant and clean. Clean up your legacy debt!
 

Celcius

°Temp. member
Taking Java in college, trying to wrap my head around it while having limited time for it. Any good resources to consult or anyone who would be great for helping me out? Thanks.

EDIT: My textbook is this by the way, Absolute Java 5th Edition.

One of my favorite beginning Java books is Thinking In Java, by Bruce Eckel. The author allows you to download the 3rd edition of the book from his website for free: http://www.mindview.net/Books/TIJ/
I own a physical copy of the 3rd edition that I've read twice and I highly recommend it.
 

Jarmel

Banned
Are there any good resources on programming in Linux? I have this introductory project for Linux involving future dates and I'm just not understanding any of it.
 