Are there any good sources on programming in Linux? I have this introductory project for Linux involving future dates, and I'm just not understanding any of it.
You mean the Linux environment, system calls, or Linux systems implementation?
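If it's the date arithmetic that's tripping you up, here's a guess at the kind of thing such a project involves (a minimal sketch using standard <ctime>, nothing Linux-specific; the 90-day offset is just a made-up example):
Code:
#include <ctime>
#include <iostream>

int main() {
    // mktime() renormalizes out-of-range fields, so adding to tm_mday
    // is enough to land on a future date, month/year rollover included.
    std::time_t now = std::time(nullptr);
    std::tm date = *std::localtime(&now);
    date.tm_mday += 90;                       // 90 days into the future
    std::mktime(&date);                       // normalize the result
    char buf[32];
    std::strftime(buf, sizeof buf, "%Y-%m-%d", &date);
    std::cout << "90 days from now: " << buf << '\n';
    return 0;
}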
I am having a dispute with a professor over her grading of an assignment. First, I want to ask how people here would interpret the assignment.
The assignment says the program should ask for two strings from the user. The program will then state whether the second string is a substring of the first string. That's all the assignment says. From that, would you believe that the definition of substring for this problem is or is not case sensitive?
I assumed it is case sensitive, because strings are nothing more than sequences of characters: A and a are not the same character, and "apple" is not a substring of "Apples". The professor disagrees and believes her problem implied ignoring case. She says I should have written that the user must enter all lower case. I lost 40% of the points this problem was worth on the homework because of this. I'm going to her office today to dispute it with her. How do you guys feel about this?
I know it's naive to think it's possible, but I really don't want to see ASP.NET Web Forms ever again. I don't understand why people don't use MVC. It's just so much more elegant and clean. Clean up your legacy debt!
I mean, if she's having you assume things, that's not a very good precedent for a programming assignment. I would have interpreted it the same as you.
If that's literally all the information there is (so no other conventions from earlier tasks or anything like that), I think you are in the right. To me, substring means the characters are equal, and 'A' != 'a'. On top of that, it's pretty ridiculous to take off 40% of the points for that.
I'm with you. I learned that a isn't the same as A in programming, unless stated otherwise. She should have specified that. Were there other people who had the same problem? If more people go, it will be easier to persuade her that this is wrong.

She ended up giving back most of the credit. She agreed with my points. I feel like she should have given me full credit back, but I didn't want to fight over 2 points. More importantly, she assured me that ambiguous instructions won't be handled the same way on future assignments.
Case insensitivity needs to be indicated in most common uses of substring and string equality, so case sensitive is the default way of handling things. In the absence of other information, case sensitivity is the accepted interpretation.
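To make that concrete with one example (C++ here, but most languages behave the same way): std::string::find compares characters exactly, so the search is case sensitive unless you explicitly do extra work.
Code:
#include <iostream>
#include <string>

int main() {
    std::string text = "Apples";
    // find() matches characters exactly, so 'a' != 'A' and the
    // lowercase needle is not found.
    if (text.find("apple") == std::string::npos)
        std::cout << "\"apple\" is not a substring of \"Apples\"\n";
    return 0;
}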
At first read I would assume that the project is case insensitive, but you are definitely right. Glad you got most of your points back and that she will take care of the ambiguity from now on. Just wondering, did you students have to handle defining/finding a substring yourselves, or could you use something pre-written?

She was fine with us using predefined methods, but I did it by hand.
Anyone tried the Coursera courses related to Android programming? I think I will start one, but it says I should have some previous Java knowledge (which I don't). Should I give it a try?
Hey guys. I'm taking a course in requirements engineering, and we're specifying requirements for a project we have. We have ended up with a lot of requirements that are for creating, deleting and updating entities in the system, but it seems awfully redundant to repeat yourself for every different type of entity we have. The thing is that many entities, but not all, should follow CRUD exactly, so we can't really have update and delete follow implicitly from create for every entity. How do we handle this properly?
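One way to think about it, by analogy with code (a hypothetical sketch, with made-up names): state the shared CRUD operations once, generically, and then specify only the per-entity deviations. The same move works in prose requirements: one "for each entity of type X, the system shall allow create/read/update/delete" requirement, plus an explicit exception list for the entities that deviate.
Code:
#include <string>

// Hypothetical sketch: the four CRUD operations stated once, generically,
// instead of being restated for every entity type.
template <typename Entity>
struct Crud {
    virtual int    create(const Entity& e)         = 0;
    virtual Entity read(int id) const              = 0;
    virtual void   update(int id, const Entity& e) = 0;
    virtual void   remove(int id)                  = 0;
    virtual ~Crud() = default;
};

struct User { std::string name; };

// An entity that deviates from plain CRUD documents only the deviation;
// suppose, for example, that Users may never be deleted.
struct UserStore : Crud<User> {
    int    create(const User&) override { return 0; }  // stub
    User   read(int) const override { return {}; }     // stub
    void   update(int, const User&) override {}        // stub
    void   remove(int) override { /* forbidden for Users */ }
};

int main() { UserStore store; (void)store; return 0; }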
Exercise:
Write a program that asks the user for the number of:
1. humans
2. dogs
3. ants
4. spiders
Have the program output the average number of legs for all:
1. creatures
2. mammals
3. insects
Assume that:
1. number of humans + number of dogs > 0
2. number of ants + number of spiders > 0
Note: just in case you didn't know, ants have 6 legs and spiders have 8 legs.
Example:
How many humans are there? 2
How many dogs are there? 5
How many ants are there? 1
How many spiders are there? 1
The average number of legs for all creatures is 4.22222.
The average number of legs for all mammals is 3.42857.
The average number of legs for all insects is 7.
So here's my attempt...
Code:
#include <iostream>
using namespace std;
int main()
{
    double humans, dogs, ants, spiders;
    cout << "How many humans are there?";
    cin >> humans;
    cout << "How many dogs are there?";
    cin >> dogs;
    cout << "How many ants are there?";
    cin >> ants;
    cout << "How many spiders are there?";
    cin >> spiders;
    cout << "The average number of legs for all creatures is " << (humans*2+dogs*4+ants*6+spiders*8)%(humans+dogs+ants+spiders);
    cout << "The average number of legs for all mammals is " << (humans*2+dogs*4)%(humans+dogs);
    cout << "The average number of legs for all insects is " << (ants*6+spiders*8)%(ants+spiders);
    system("pause");
    return 0;
}
I'm getting this error message: "invalid operands of types `double' and `double' to binary `operator%'"
% is the modulo operator (remainder), which is only defined for integer types; for division you want /.
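For reference, a corrected sketch of the posted program, with / in place of % (and the system("pause") call dropped); given the exercise's assumptions, the divisors are never zero:
Code:
#include <iostream>
using namespace std;

int main()
{
    double humans, dogs, ants, spiders;
    cout << "How many humans are there? ";
    cin >> humans;
    cout << "How many dogs are there? ";
    cin >> dogs;
    cout << "How many ants are there? ";
    cin >> ants;
    cout << "How many spiders are there? ";
    cin >> spiders;
    // '/' on doubles is floating-point division; '%' is only defined
    // for integer types, which is what the compiler error was saying.
    cout << "The average number of legs for all creatures is "
         << (humans*2 + dogs*4 + ants*6 + spiders*8) / (humans + dogs + ants + spiders) << endl;
    cout << "The average number of legs for all mammals is "
         << (humans*2 + dogs*4) / (humans + dogs) << endl;
    cout << "The average number of legs for all insects is "
         << (ants*6 + spiders*8) / (ants + spiders) << endl;
    return 0;
}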
Do you have programming knowledge in other languages? (If yes, which ones?)
If you know some basic OOP concepts like implementing interfaces, Java should not be a roadblock to starting Android development. Some basic syntax knowledge would be preferred, though.
Hey everyone,
I have recently developed an interest in programming and would really love to get a better understanding of it. My ultimate goal is to learn how to create video games, and don't worry, I understand that will take thousands upon thousands of hours and that there are a multitude of other factors involved. All the more reason to get going right now! I am pretty much completely new to it; all I know how to do is very basic HTML and CSS. After launching Unity for the first time, I have absolutely no idea where to start, as you may expect. What would you all recommend for my situation? Should I learn Java or C#? Or both? Where are the best places for a complete beginner to start learning these languages? I'm assuming I have hundreds of hours of learning to go before I should even attempt anything in Unity, but I'm curious if there are actually some benefits of messing around with it as I start learning some code.
I'm really looking forward to getting some feedback. Please include as many links to tutorials and whatnot as you want! I'm unemployed for the next couple of weeks, so I want to get right into this!
C#, definitely. Download the free version of Visual Studio. Make some simple command line applications, first. Find a Comp Sci 101 textbook and read it. Dick around in Unity and try to apply the lessons from the book into your personal projects. You will have rapid progress in your first year.
At some point you'll want some sort of formal education so you can learn "the right way" to code, but dicking around will take you a long way in the beginning.

Thanks! I'll start that download now.
A long time ago I learned very basic VB in high school, and then at university they taught us Pascal (lol), basic stuff as well. So basically I know almost nothing, but it has always been quite easy for me to understand things and put them to work.
I started the course; the first two lectures (weeks) basically talk about Android in general, some specific Android classes, and stuff like that, no programming, so it is fine, good to get a sense of it. Somebody posted about a book for learning Java; I downloaded it and will try to learn from it by myself.
I don't, but I will read about that, thanks.
Syntax is also a good first step, but it won't be difficult to pick up.
Quick question about some Object Oriented programming.
So in most OO languages you can use dot notation to access the attributes of an object.
For example:
Code:
Variable = Object.attribute
However, this seems to be frowned upon; instead, you are supposed to use "getter" and "setter" methods like this:
Code:
Variable = Object.getAttribute()
Object.setAttribute(Value)
My question is: why?
What I mean is, why is making separate methods the "correct" way to do things, and simply referencing with dot notation the "wrong" way?
What kind of complications can arise from using dot notation instead of get/set methods?
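The textbook answer, as a minimal sketch (hypothetical Account class, C++ for concreteness): if callers go through a getter/setter, the class can later add validation or change its internal representation without any caller changing; a public field allows neither.
Code:
#include <iostream>
#include <stdexcept>

class Account {
public:
    double getBalance() const { return balance; }

    void setBalance(double value) {
        // The invariant lives in one place; with a public field, every
        // caller would have to remember to check this themselves.
        if (value < 0)
            throw std::invalid_argument("balance cannot be negative");
        balance = value;
    }

private:
    double balance = 0.0;   // callers can never touch this directly
};

int main() {
    Account a;
    a.setBalance(100.0);
    std::cout << a.getBalance() << '\n';
    // a.balance = -5;      // won't compile: the field is private
    return 0;
}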
C# will let you cheat: its properties give you getter/setter behavior behind plain field syntax.

Ah, I see, that makes a bit more sense then. I guess I just find them a little clunky, but it makes sense. Thanks for the replies, everyone.
I just started a job at a serious games company on Monday.
The job sounded really good on paper, with a focus on developing native mobile and mobile web apps, and the people are really great and I really like the atmosphere, but god, programming in JavaScript is such a damn hassle! :/ When is wasm coming again? I can't wait to replace this god-awful language, at least on the client side. I've heard good things about Node.js.
Jonathan Blow is streaming; he's working on his self-developed language (Jai). Currently he's trying to integrate stb_vorbis (an Ogg Vorbis library) into the game he's developing in the language. http://www.twitch.tv/naysayer88

I need to rewatch Blow's talk where he explains his coding style. He subverts almost every OO guideline.

Page-long methods are terrifying.

I think I actually saw the stream where he explained it. IIRC he said that, for methods in which you don't expect any part to be reused, it doesn't make sense to split them into subprocedures, because it just adds more things to remember.

That's basically his argument.

I know his justification for it, but I still disagree, because splitting a method into smaller subprocedures is also about comprehensibility. If the pieces don't need to be reused and would clutter up the namespace, they can be definitions that are local to the function. In fact, I would argue that not having little functions building up to larger pieces of functionality leads to a situation where you inevitably go back to fix a definition and, in the process of comprehending the function, end up having to keep far more in your head than would have been necessary.
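In C++, for instance, a helper that only one function needs can be a lambda defined inside that function (a minimal made-up sketch): it aids comprehension like a subprocedure would, without putting another name in the wider namespace.
Code:
#include <iostream>
#include <vector>

double averageOfPositives(const std::vector<double>& values) {
    // Local helper: visible only inside this function, so it doesn't
    // clutter any namespace, yet the loop below still reads clearly.
    auto isPositive = [](double v) { return v > 0.0; };

    double sum = 0.0;
    int count = 0;
    for (double v : values) {
        if (isPositive(v)) {
            sum += v;
            ++count;
        }
    }
    return count > 0 ? sum / count : 0.0;
}

int main() {
    std::cout << averageOfPositives({1.0, -2.0, 3.0}) << '\n';  // prints 2
    return 0;
}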
The problems with the application I support are 98% due to its horrid design and 2% due to the natural complexity of the business. And despite years of working on the application, many of the original problems remain (case in point: the 1500+ line loop), because new functionality always seems to get prioritized over dealing with the technical debt of bad, hard-to-read, harder-to-maintain code.