
Programming |OT| C is better than C++! No, C++ is better than C

Hey guys,

Can I ask for a bit of help with an issue I'm having?

I've got Python 2.7 and I can't get bloody pygame to import in IDLE, ever! I tried it on my Mac and on my PC, and neither works.

I tried replacing my 64-bit Python with 32-bit Python and then downloading the 32-bit installer for pygame, but it didn't work; the computer says pygame has installed, but I can't import pygame.

I've been googling for a while now, but it's not really helping, since I'm kind of new to programming and a lot of stuff is just going over my head. I always get the error below:

Code:
Traceback (most recent call last):
  File "<pyshell#0>", line 1, in <module>
    import pygame
ImportError: No module named pygame
 

ElTorro

I wanted to dominate the living room. Then I took an ESRAM in the knee.
A Brief, Incomplete, and Mostly Wrong History of Programming Languages

1801 - Joseph Marie Jacquard uses punch cards to instruct a loom to weave "hello, world" into a tapestry. Redditers of the time are not impressed due to the lack of tail call recursion, concurrency, or proper capitalization.

1842 - Ada Lovelace writes the first program. She is hampered in her efforts by the minor inconvenience that she doesn't have any actual computers to run her code. Enterprise architects will later relearn her techniques in order to program in UML.

1936 - Alan Turing invents every programming language that will ever be but is shanghaied by British Intelligence to be 007 before he can patent them.

1936 - Alonzo Church also invents every language that will ever be but does it better. His lambda calculus is ignored because it is insufficiently C-like. This criticism occurs in spite of the fact that C has not yet been invented.

1940s - Various "computers" are "programmed" using direct wiring and switches. Engineers do this in order to avoid the tabs vs spaces debate.

1957 - John Backus and IBM create FORTRAN. There's nothing funny about IBM or FORTRAN. It is a syntax error to write FORTRAN while not wearing a blue tie.

1958 - John McCarthy and Paul Graham invent LISP. Due to high costs caused by a post-war depletion of the strategic parentheses reserve LISP never becomes popular[1]. In spite of its lack of popularity, LISP (now "Lisp" or sometimes "Arc") remains an influential language in "key algorithmic techniques such as recursion and condescension"[2].

1959 - After losing a bet with L. Ron Hubbard, Grace Hopper and several other sadists invent the Capitalization Of Boilerplate Oriented Language (COBOL). Years later, in a misguided and sexist retaliation against Adm. Hopper's COBOL work, Ruby conferences frequently feature misogynistic material.

1964 - John Kemeny and Thomas Kurtz create BASIC, an unstructured programming language for non-computer scientists.

1965 - Kemeny and Kurtz go to 1964.

1970 - Guy Steele and Gerald Sussman create Scheme. Their work leads to a series of "Lambda the Ultimate" papers culminating in "Lambda the Ultimate Kitchen Utensil." This paper becomes the basis for a long running, but ultimately unsuccessful run of late night infomercials. Lambdas are relegated to relative obscurity until Java makes them popular by not having them.

1970 - Niklaus Wirth creates Pascal, a procedural language. Critics immediately denounce Pascal because it uses "x := x + y" syntax instead of the more familiar C-like "x = x + y". This criticism happens in spite of the fact that C has not yet been invented.

1972 - Dennis Ritchie invents a powerful gun that shoots both forward and backward simultaneously. Not satisfied with the number of deaths and permanent maimings from that invention he invents C and Unix.

1972 - Alain Colmerauer designs the logic language Prolog. His goal is to create a language with the intelligence of a two year old. He proves he has reached his goal by showing a Prolog session that says "No." to every query.

1973 - Robin Milner creates ML, a language based on the M&M type theory. ML begets SML which has a formally specified semantics. When asked for a formal semantics of the formal semantics Milner's head explodes. Other well known languages in the ML family include OCaml, F#, and Visual Basic.

1980 - Alan Kay creates Smalltalk and invents the term "object oriented." When asked what that means he replies, "Smalltalk programs are just objects." When asked what objects are made of he replies, "objects." When asked again he says "look, it's all objects all the way down. Until you reach turtles."

1983 - In honor of Ada Lovelace's ability to create programs that never ran, Jean Ichbiah and the US Department of Defense create the Ada programming language. In spite of the lack of evidence that any significant Ada program is ever completed historians believe Ada to be a successful public works project that keeps several thousand roving defense contractors out of gangs.

1983 - Bjarne Stroustrup bolts everything he's ever heard of onto C to create C++. The resulting language is so complex that programs must be sent to the future to be compiled by the Skynet artificial intelligence. Build times suffer. Skynet's motives for performing the service remain unclear but spokespeople from the future say "there is nothing to be concerned about, baby," in an Austrian-accented monotone. There is some speculation that Skynet is nothing more than a pretentious buffer overrun.

1986 - Brad Cox and Tom Love create Objective-C, announcing "this language has all the memory safety of C combined with all the blazing speed of Smalltalk." Modern historians suspect the two were dyslexic.

1987 - Larry Wall falls asleep and hits Larry Wall's forehead on the keyboard. Upon waking Larry Wall decides that the string of characters on Larry Wall's monitor isn't random but an example program in a programming language that God wants His prophet, Larry Wall, to design. Perl is born.

1990 - A committee formed by Simon Peyton-Jones, Paul Hudak, Philip Wadler, Ashton Kutcher, and People for the Ethical Treatment of Animals creates Haskell, a pure, non-strict, functional language. Haskell gets some resistance due to the complexity of using monads to control side effects. Wadler tries to appease critics by explaining that "a monad is a monoid in the category of endofunctors, what's the problem?"

1991 - Dutch programmer Guido van Rossum travels to Argentina for a mysterious operation. He returns with a large cranial scar, invents Python, is declared Dictator for Life by legions of followers, and announces to the world that "There Is Only One Way to Do It." Poland becomes nervous.

1995 - At a neighborhood Italian restaurant Rasmus Lerdorf realizes that his plate of spaghetti is an excellent model for understanding the World Wide Web and that web applications should mimic their medium. On the back of his napkin he designs Programmable Hyperlinked Pasta (PHP). PHP documentation remains on that napkin to this day.

1995 - Yukihiro "Mad Matz" Matsumoto creates Ruby to avert some vaguely unspecified apocalypse that will leave Australia a desert run by mohawked warriors and Tina Turner. The language is later renamed Ruby on Rails by its real inventor, David Heinemeier Hansson. [The bit about Matsumoto inventing a language called Ruby never happened and better be removed in the next revision of this article - DHH].

1995 - Brendan Eich reads up on every mistake ever made in designing a programming language, invents a few more, and creates LiveScript. Later, in an effort to cash in on the popularity of Java the language is renamed JavaScript. Later still, in an effort to cash in on the popularity of skin diseases the language is renamed ECMAScript.

1996 - James Gosling invents Java. Java is a relatively verbose, garbage collected, class based, statically typed, single dispatch, object oriented language with single implementation inheritance and multiple interface inheritance. Sun loudly heralds Java's novelty.

2001 - Anders Hejlsberg invents C#. C# is a relatively verbose, garbage collected, class based, statically typed, single dispatch, object oriented language with single implementation inheritance and multiple interface inheritance. Microsoft loudly heralds C#'s novelty.

2003 - A drunken Martin Odersky sees a Reese's Peanut Butter Cup ad featuring somebody's peanut butter getting on somebody else's chocolate and has an idea. He creates Scala, a language that unifies constructs from both object oriented and functional languages. This pisses off both groups and each promptly declares jihad.

http://james-iry.blogspot.de/2009/05/brief-incomplete-and-mostly-wrong.html
 
That moment when you pinpoint the odd packet size discrepancy you've been chasing for the past week, and realize it's a product of a (equally convoluted) defect in an irreplaceable 3rd party library.



Damn you, libUDT!
 

ElTorro

I wanted to dominate the living room. Then I took an ESRAM in the knee.
That moment when you pinpoint the odd packet size discrepancy you've been chasing for the past week, and realize it's a product of a (equally convoluted) defect in an irreplaceable 3rd party library.

That sweet second of joy between finding a bug and realizing how much work it'll be to fix it.
 
That sweet second of joy between finding a bug and realizing how much work it'll be to fix it.
Oh, there was no joy... Just horror.

It's open source right? At least you can fix it :)
Yeah... probably not. Probably about 80 man hours... to sink into a UDP-based protocol implementation, understand the project methodology, reproduce the problem in an isolated environment, fix it.

Bigger problem is that this is a hobby project! Ain't nobody got time to fix somebody else's broken epoll()!
 
Oh, there was no joy... Just horror.


Yeah... probably not. Probably about 80 man hours... to sink into a UDP-based protocol implementation, understand the project methodology, reproduce the problem in an isolated environment, fix it.

Bigger problem is that this is a hobby project! Ain't nobody got time to fix somebody else's broken epoll()!

Ahh, yea if it's for a hobby project then yea, not sure I could convince myself to do it either.

Work-wise though, it's funny I used to dread the idea of having to mess with someone else's open source crap. But for the past probably 3 years or so I've been employed full time working on open source projects, and it's so liberating. I find bugs in all kinds of external libraries that we use, and I like that I don't have to have crap like not being able to fix a bug standing in my way. I've even had to fix bugs in git, and once I get annoyed enough (which is likely to be soon) I'm probably going to submit some patches to CMake too.
 

Nesotenso

Member
working on an exercise to understand the concept of arrays and pointers

Code:
int A[] = {1, 2, 3, 4, 5};
int i;
int *p = A;
*p++;
printf("Value of first index:%d\n", *p);
for (i = 0; i < 5; i++)
{
    printf("Value of %d\n", A[i]);
}
return 0;

When I am printing out the first index with the first print statement, 1 is incremented to 2.
But for the second print statement in the loop the values remain as it is. Can anyone explain why?
 
working on an exercise to understand the concept of arrays and pointers

Code:
int A[] = {1, 2, 3, 4, 5};
int i;
int *p = A;
*p++;
printf("Value of first index:%d\n", *p);
for (i = 0; i < 5; i++)
{
    printf("Value of %d\n", A[i]);
}
return 0;

When I am printing out the first index with the first print statement, 1 is incremented to 2.
But for the second print statement in the loop the values remain as it is. Can anyone explain why?

I'm not sure I understand the wording of your question. I can tell from reading the code what this will output, but can you write out the output as well as what you expect the output should be?
 

Nesotenso

Member
I'm not sure I understand the wording of your question. I can tell from reading the code what this will output, but can you write out the output as well as what you expect the output should be?

sorry

Code:
Value of first index:2
Value of 1 // expect 2 here instead of 1
Value of 2
Value of 3
Value of 4
Value of 5

I was wondering why the value in the first index was still coming out as 1 in the for loop. When I am working with the initial pointer shouldn't the value in the memory address pointed to be changed permanently?
 
operation on array elements takes precedence over the dereferencing operator?

Yea, ++ is at the top of the precedence table and * is at the bottom. So what you wrote is equivalent to *(p++)

We can rewrite that line as:

p=p+1
*(p-1)

Then you print it. None of the array values ever got incremented, you just printed the second value, or A[1].

Compare to (*p)++, which is the same as

*p = *p + 1
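
If it helps to see both forms run, here's a tiny self-contained example (my own sketch, not from the exercise) that prints what each one actually does:

Code:
#include <stdio.h>

int main(void)
{
    int A[] = {1, 2, 3, 4, 5};
    int *p = A;

    /* *p++ : the ++ binds to p, so the pointer advances; the value of
       the expression is whatever p pointed to before the increment. */
    int before = *p++;
    printf("*p++ gave %d, p now points at %d, A[0] is still %d\n",
           before, *p, A[0]);                     /* prints 1, 2, 1 */

    /* (*p)++ : the parentheses force the increment onto the element
       itself, so the array contents actually change. */
    p = A;
    (*p)++;
    printf("after (*p)++, A[0] is now %d\n", A[0]);   /* prints 2 */

    return 0;
}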
 

Nesotenso

Member
Yea, ++ is at the top of the precedence table and * is at the bottom. So what you wrote is equivalent to *(p++)

We can rewrite that line as:

p=p+1
*(p-1)

Then you print it. None of the array values ever got incremented, you just printed the second value, or A[1].

Compare to (*p)++, which is the same as

*p = *p + 1

ok got it. so A[1] was getting printed. thanks.
 

Zeus7

Member
When connecting to a database in VS2013, you create the database and then connect to it by adding the data source and choosing the connection string if I am following the MSDN tutorial correctly.

However, once this is done, do you still need to create an instance of the connection in the code like this:

Code:
SqlConnection connection = new SqlConnection("");

Inside the quotation marks do I have to insert this path:

Code:
Data Source=(LocalDB)\v11.0;AttachDbFilename=|DataDirectory|\Forms\R7DB.mdf;Integrated Security=True

I am confused, the MSDN tutorial just stopped at connecting to a database. I want to make sure I can connect to the database in the code and then start inputting data to the database from a textbox etc.
 

jokkir

Member
I have a question more about the design and structure of a program. I have an assignment that has several tasks, but for this question I'll talk about the calculator task. This calculator is just supposed to take two values from the user and then display the sum, product, difference, and quotient of the two values entered. However, when typing the code, I'm not sure if I'm structuring the program the best way possible.

How I have it structured now is I have separate methods/functions for the calculations, and then one for printing the menu and in that print_menu method, I have it take in user input. Is that correct or would that need to be in a separate method so it's separate from a method mainly used for printing the menu (and results)? And with that said, would it be better to have another function to print the results away from the print_menu method?

I've been thinking about this more but not sure which way is correct.

This is pseudo code of what I'm talking about
Code:
public class Calc {
    public static void main(String[] args){
        print_menu();
    }

    public static void print_menu(){
        /*variable declaration*/

        /*Hello message and prompt to enter two numbers*/

        System.out.print("First number: ");
        a = userInput.nextInt();
        System.out.print("Second number: ");
        b2 = userInput.nextInt();

        /*Total print*/
        System.out.println("Sum: " + sum(a, b));
        System.out.println("Product: " + product(a, b));
        System.out.println("Difference: " + difference(a, b));
        System.out.println("Quotient: " + quotient(a, b));
    }

    public static int sum(int a, int b){
        return a + b;
    }
    public static int product(int a, int b){
        return a * b;
    }
    public static int difference(int a, int b){
        return a - b;
    }
    public static int quotient(int a, int b){
        return a / b;
    }
}
 

Kalnos

Banned
I've been thinking about this more but not sure which way is correct.

This is pseudo code of what I'm talking about

Basically, just write code and if you find that a function is growing too large or that you're repeating certain sections of the code then you can separate those portions into their own functions. There is no real 'correct' way to do things, but what you did makes the code reusable and easier to read.

Your code is fine where it is now... no need to go crazy separating it out. KISS and YAGNI are principles to live by.
 
How I have it structured now is I have separate methods/functions for the calculations, and then one for printing the menu and in that print_menu method, I have it take in user input. Is that correct or would that need to be in a separate method so it's separate from a method mainly used for printing the menu (and results)? And with that said, would it be better to have another function to print the results away from the print_menu method?

I've been thinking about this more but not sure which way is correct.

This is pseudo code of what I'm talking about
Code:
public class Calc {
    public static void main(String[] args){
        print_menu();
    }

    public static void print_menu(){
        /*variable declaration*/

        /*Hello message and prompt to enter two numbers*/

        System.out.print("First number: ");
        a = userInput.nextInt();
        System.out.print("Second number: ");
        b2 = userInput.nextInt();

        /*Total print*/
        System.out.println("Sum: " + sum(a, b));
        System.out.println("Product: " + product(a, b));
        System.out.println("Difference: " + difference(a, b));
        System.out.println("Quotient: " + quotient(a, b));
    }

    public static int sum(int a, int b){
        return a + b;
    }
    public static int product(int a, int b){
        return a * b;
    }
    public static int difference(int a, int b){
        return a - b;
    }
    public static int quotient(int a, int b){
        return a / b;
    }
}

If this is all your program is doing, then creating a separate function print_menu() is unnecessary overhead. All of that function's content could just as well live in the main function.

Two (or three) immediate mistakes I see, even though it is pseudo code:
- Input variables a and b2 have no (explicit) type declaration.
- Input variable b2 is never used, and an undefined variable b is passed to all four arithmetic functions.
- the quotient function does integer division, so the fractional part of the result is truncated.
 

jokkir

Member
Basically, just write code and if you find that a function is growing too large or that you're repeating certain sections of the code then you can separate those portions into their own functions. There is no real 'correct' way to do things, but what you did makes the code reusable and easier to read.

Your code is fine where it is now... no need to go crazy separating it out. KISS and YAGNI are principles to live by.

Thank you! I'll look into those two principles.

If this is all your program is doing, then creating a separate function print_menu() is unnecessary overhead. All of that function's content could just as well live in the main function.

Two (or three) immediate mistakes I see, even though it is pseudo code:
- Input variables a and b2 have no (explicit) type declaration.
- Input variable b2 is never used, and an undefined variable b is passed to all four arithmetic functions.
- the quotient function does integer division, so the fractional part of the result is truncated.

Oops, I just didn't want to copy and paste my code 100% in case of some school policy thing, so I made a mistake when changing variable names and whatnot, but the declarations for both int a and int b are there (they're both = 0 up until the user input, then they change value).

As for the quotient part, it was specifically said to use integers so I can't change the return value of it to a float or double.

Hmm, yeah, I might move the stuff from print_menu() to main.
 

Ahnez

Member
Thank you! I'll look into those two principles.



Oops, I just didn't want to copy and paste my code 100% in case of some school policy thing, so I made a mistake when changing variable names and whatnot, but the declarations for both int a and int b are there (they're both = 0 up until the user input, then they change value).

As for the quotient part, it was specifically said to use integers so I can't change the return value of it to a float or double.

Hmm, yeah, I might move the stuff from print_menu() to main.

One more detail..
If b == 0, then the quotient function will cause an error (division by 0)
 

Zoe

Member
When connecting to a database in VS2013, you create the database and then connect to it by adding the data source and choosing the connection string if I am following the MSDN tutorial correctly.

However, once this is done, do you still need to create an instance of the connection in the code like this:

Code:
SqlConnection connection = new SqlConnection("");

Inside the quotation marks do I have to insert this path:

Code:
Data Source=(LocalDB)\v11.0;AttachDbFilename=|DataDirectory|\Forms\R7DB.mdf;Integrated Security=True

I am confused, the MSDN tutorial just stopped at connecting to a database. I want to make sure I can connect to the database in the code and then start inputting data to the database from a textbox etc.

Yes, you will need to open a connection to the database each time you need it by plugging that connection string into the new SqlConnection.
 

jokkir

Member
One more detail..
If b == 0, then the quotient function will cause an error (division by 0)

Oh yeah, I know about that, but the professor said not to worry about any exceptions for now, so I'll just leave it out. I could add it, but he was putting emphasis on doing things exactly as the instructions say, so I want to avoid adding anything extra.
 

GK86

Homeland Security Fail
Does anyone know any good guides/tutorials for looping over URL pages (when web scraping with Python)? Thanks.
 

Chris R

Member
When connecting to a database in VS2013, you create the database and then connect to it by adding the data source and choosing the connection string if I am following the MSDN tutorial correctly.

You can also store the connection string in your web.config/app.config file and reference it via ConfigurationManager.ConnectionStrings["connectionStringNameHere"]
 

teiresias

Member
So I'm coding something for work that involves a number of separate processes of varying priority with one or two critical processes that must meet deadlines and have certain rates of execution > 100Hz.

These two critical processes I'm thinking of combining into one process since I think they will need to be at the same rate anyway and share some of the same data, but my question is really how the more experienced OS developers would go about bench testing these sorts of things in order to:

1) see a raw execution time of a section of code on a particular platform to see if a particular platform has absolutely any hope of meeting rate needs (ie. if the processor can't execute the loop fast enough anyway there will be no hope once it has to be switched out with a bunch of other processes).

2) put the whole system together and then have some way of monitoring in real-time whether a critical process is missing its deadlines

This is likely to be done on a Linux distro (Debian likely) with the real-time patch applied. Platform unknown, but I'm starting out low-end on Beaglebone (single-core ARM) and working my way up only if necessary since weight is an issue for the hardware platform here.

I am in NO WAY a Linux developer (mainly a hardware designer and do to-the-metal coding on microcontrollers for various support functions) so all the things I'm learning about shared memory and general linux development are interesting but slow . . . this is only compounded by the fact that this is embedded linux and not desktop with large processor overhead, haha. Anyway, just wondering what libraries/headers/functions I should be looking into to get the data above.
 

Two Words

Member
I just completed my homework where I had to create a program in C++ that validated that a password had at least 1 upper case letter, one lower case letter, 1 numerical digit, and at least 6 characters. I used functions like isupper(), islower(), and isdigit(). These functions are supposed to be in the cctype library. I forgot to have the header statement #include <cctype> in my code. However, my program ran perfectly fine. I was using Code::Blocks on Windows, if that matters. I'm confused how it worked though.
 

leroidys

Member
So I'm coding something for work that involves a number of separate processes of varying priority with one or two critical processes that must meet deadlines and have certain rates of execution > 100Hz.

These two critical processes I'm thinking of combining into one process since I think they will need to be at the same rate anyway and share some of the same data, but my question is really how the more experienced OS developers would go about bench testing these sorts of things in order to:

1) see a raw execution time of a section of code on a particular platform to see if a particular platform has absolutely any hope of meeting rate needs (ie. if the processor can't execute the loop fast enough anyway there will be no hope once it has to be switched out with a bunch of other processes).

2) put the whole system together and then have some way of monitoring in real-time whether a critical process is missing its deadlines

This is likely to be done on a Linux distro (Debian likely) with the real-time patch applied. Platform unknown, but I'm starting out low-end on Beaglebone (single-core ARM) and working my way up only if necessary since weight is an issue for the hardware platform here.

I am in NO WAY a Linux developer (mainly a hardware designer and do to-the-metal coding on microcontrollers for various support functions) so all the things I'm learning about shared memory and general linux development are interesting but slow . . . this is only compounded by the fact that this is embedded linux and not desktop with large processor overhead, haha. Anyway, just wondering what libraries/headers/functions I should be looking into to get the data above.

For 1), I mean the simplest thing to do is just put in some clock calls in the specific section. There's also stuff like dtrace, valgrind/callgrind, etc. but from your description I'm not sure if they would be available in your environment.
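
Something like this is what I mean by clock calls, roughly (a sketch assuming a POSIX system with clock_gettime; do_work() is just a stand-in for the section you want to measure):

Code:
#define _POSIX_C_SOURCE 200809L   /* for clock_gettime */
#include <stdio.h>
#include <time.h>

/* Placeholder for the critical section being measured. */
static void do_work(void)
{
    volatile long x = 0;
    for (long i = 0; i < 1000000; i++)
        x += i;
}

int main(void)
{
    struct timespec start, end;

    clock_gettime(CLOCK_MONOTONIC, &start);
    do_work();
    clock_gettime(CLOCK_MONOTONIC, &end);

    double elapsed_us = (end.tv_sec - start.tv_sec) * 1e6 +
                        (end.tv_nsec - start.tv_nsec) / 1e3;
    printf("section took %.1f us\n", elapsed_us);
    return 0;
}

On older toolchains you may need to link with -lrt for clock_gettime.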
 

tokkun

Member
So I'm coding something for work that involves a number of separate processes of varying priority with one or two critical processes that must meet deadlines and have certain rates of execution > 100Hz.

These two critical processes I'm thinking of combining into one process since I think they will need to be at the same rate anyway and share some of the same data, but my question is really how the more experienced OS developers would go about bench testing these sorts of things in order to:

1) see a raw execution time of a section of code on a particular platform to see if a particular platform has absolutely any hope of meeting rate needs (ie. if the processor can't execute the loop fast enough anyway there will be no hope once it has to be switched out with a bunch of other processes).

2) put the whole system together and then have some way of monitoring in real-time whether a critical process is missing its deadlines

This is likely to be done on a Linux distro (Debian likely) with the real-time patch applied. Platform unknown, but I'm starting out low-end on Beaglebone (single-core ARM) and working my way up only if necessary since weight is an issue for the hardware platform here.

I am in NO WAY a Linux developer (mainly a hardware designer and do to-the-metal coding on microcontrollers for various support functions) so all the things I'm learning about shared memory and general linux development are interesting but slow . . . this is only compounded by the fact that this is embedded linux and not desktop with large processor overhead, haha. Anyway, just wondering what libraries/headers/functions I should be looking into to get the data above.

Option A: Inline Assembly Calls to Read Cycle Counter
+ High resolution
+ Low impact on code performance (meaning high accuracy)
- Not portable to different processor architectures
- Complex
- Difficult to measure time between executions in a multiprocessor system if you get scheduled on more than one processor

Option B: System Calls to Get Time
+ Simple code
+ Measures absolute time
+ Portable across POSIX systems
- Low resolution
- May impact code performance (potentially low accuracy)

Option C: Code Profiling Tool (Callgrind, gprof, etc.)
+ No code changes
- High performance impact (low accuracy)
- Slower data collection
- May rely on sampling, making it more difficult to capture rare performance events.

Option D: Kernel Mod to Track Scheduling times
+ May already be supported in your realtime kernel
+ Absolute time measurement
+ Fairly high resolution / accuracy
- Very complex to write on your own
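
For point 2), here's a rough sketch of what Option B can look like when the process checks its own deadlines in a 100 Hz loop (PERIOD_NS and do_cycle() are placeholders, not anything from your project):

Code:
#define _POSIX_C_SOURCE 200809L   /* for clock_gettime / clock_nanosleep */
#include <stdio.h>
#include <time.h>

#define PERIOD_NS 10000000L        /* 10 ms = 100 Hz */

/* Placeholder for one iteration of the critical work. */
static void do_cycle(void) { }

/* Advance a timespec by ns nanoseconds. */
static void add_ns(struct timespec *t, long ns)
{
    t->tv_nsec += ns;
    while (t->tv_nsec >= 1000000000L) {
        t->tv_nsec -= 1000000000L;
        t->tv_sec++;
    }
}

int main(void)
{
    struct timespec deadline, now;
    long missed = 0;

    clock_gettime(CLOCK_MONOTONIC, &deadline);

    for (int i = 0; i < 1000; i++) {
        add_ns(&deadline, PERIOD_NS);      /* absolute deadline for this cycle */
        do_cycle();

        clock_gettime(CLOCK_MONOTONIC, &now);
        if (now.tv_sec > deadline.tv_sec ||
            (now.tv_sec == deadline.tv_sec && now.tv_nsec > deadline.tv_nsec))
            missed++;                      /* work finished after its deadline */

        /* Sleep until the absolute deadline so the loop runs at ~100 Hz. */
        clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &deadline, NULL);
    }

    printf("%ld of 1000 cycles missed their %ld ns deadline\n",
           missed, (long)PERIOD_NS);
    return 0;
}

Because the deadlines are absolute, late wakeups from scheduling delay count as misses too; for detail on where the time actually goes you'd still want Option D or a tracer.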

I just completed my homework where I had to create a program in C++ that validated that a password had at least 1 upper case letter, one lower case letter, 1 numerical digit, and at least 6 characters. I used functions like isupper(), islower(), and isdigit(). These functions are supposed to be in the cctype library. I forgot to have the header statement #include <cctype> in my code. However, my program ran perfectly fine. I was using Code::Blocks on Windows, if that matters. I'm confused how it worked though.

Most likely <cctype> is being included indirectly by one of the other headers/libraries you are including.
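
It's best not to rely on that, though; a different compiler or standard library might not pull it in for you, and the same code would stop building. For illustration, here's roughly what that kind of check looks like with the header spelled out (this sketch uses C's <ctype.h>, which is the same set of functions C++'s <cctype> wraps; is_valid() is just a made-up name):

Code:
#include <ctype.h>
#include <stdio.h>

/* Returns 1 if the password has >= 6 characters and at least one
   upper-case letter, one lower-case letter, and one digit. */
static int is_valid(const char *pw)
{
    int has_upper = 0, has_lower = 0, has_digit = 0, length = 0;

    for (; *pw != '\0'; pw++, length++) {
        unsigned char c = (unsigned char)*pw;
        if (isupper(c)) has_upper = 1;
        if (islower(c)) has_lower = 1;
        if (isdigit(c)) has_digit = 1;
    }
    return length >= 6 && has_upper && has_lower && has_digit;
}

int main(void)
{
    printf("%d\n", is_valid("Abc123"));   /* 1: meets all the rules */
    printf("%d\n", is_valid("abc123"));   /* 0: no upper-case letter */
    return 0;
}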
 
I am having an issue with my program and hope someone can help me. I am doing a project where I have to implement S-DES in CBC mode, which takes an input file (containing binary), does S-DES/CBC encryption/decryption, and then outputs the ciphertext (in binary) to a file. There is other stuff I have to do, but I have already been able to do it and I get the expected output. My issue is that I have the writeFile method written, but whenever I try to call it in main I get errors.

Could anyone give me an idea of how I could get it working?

Here are my main and writeFile methods. There are other methods I have but I don't think they are relevant to my issue.

Code:
.
..
.
 
I can immediately see an error in writeFile() where you try to access the byte array buffer[] in the for-loop, but you never initialized the array (it's null).

EDIT: Your int-to-byte cast will also give wrong results, since the conversion is incorrect. An int is 4 bytes long.
 
I seem to be having difficulty getting the average of the inputted scores and the lowest number. It correctly outputs the highest number, but the lowest number only works if the lowest input is the same as the value declared in the variable lowest. I had some average code that also wasn't working, but I removed that since I want the lowest at least to work.
Here is my code:
Code:
/*
Write a program that prompts the user to enter the number of students in the
class and each student’s name,score
and finally displays the name of the student with the highest score, average of
the entire class, and the lowest score (without name).
 */
package score_lab6;
import java.util.*;

public class Score_Lab6 {
    public static void main(String[] args) {
        int numberofStudent;

        double highest = 0;
        String nameWithHighestScore = "";
        Scanner input = new Scanner(System.in);

        System.out.print("Please enter the number of students in your class: ");
        numberofStudent = input.nextInt();

        String tempName = "";
        double tempScore;
        double lowest = 50;
        while (numberofStudent-- > 0) {
            System.out.print("Enter student name and score: ");
            tempName = input.next();
            tempScore = input.nextDouble();
            if (tempScore > highest) {
                highest = tempScore;
                nameWithHighestScore = tempName;
            }
            else if (tempScore < lowest) {
                lowest = tempScore;
            }
        }
        System.out.printf("The student with the higheset score is\n %20s%10.2f", nameWithHighestScore, highest);
        System.out.printf("\n The student with the lowest score is\n %10.2f", lowest);
    }
}

Code:
Please enter the number of students in your class: 2
Enter student name and score: Billy 50
Enter student name and score: Bob 90
The student with the higheset score is
                  Bob     90.00
 The student with the lowest score is
      50.00
But if I give whoever has the lowest score a score other than 50:
Code:
Please enter the number of students in your class: 2
Enter student name and score: Billy 60
Enter student name and score: Bob 90
The student with the higheset score is
                  Bob     90.00
 The student with the lowest score is
      50.00
 
I seem to be having difficulty getting the average of the inputted scores and the lowest number. It correctly outputs the highest number, but the lowest number only works if the lowest input is the same as the value declared in the variable lowest. I had some average code that also wasn't working, but I removed that since I want the lowest at least to work.
Here is my code:

What if you never get a value lower than 50? You're comparing numbers in your data set against a number which is not in your data set, which obviously won't work. Try to think harder about the correct algorithm for finding the smallest number in a list. Note that if you choose 60 and 40 it will work fine, because 40 is less than 50. The problem is you are finding the lowest of Scores U {50}, even if 50 is not one of the numbers.
 
What if you never get a value lower than 50? You're comparing numbers in your data set against a number which is not in your data set, which obviously won't work. Try to think harder about the correct algorithm for finding the smallest number in a list. Note that if you choose 60 and 40 it will work fine, because 40 is less than 50. The problem is you are finding the lowest of Scores U {50}, even if 50 is not one of the numbers.

I made lowest = 100 before and it came out wrong, but it turns out it was because I was using else if instead of just if. The lowest score is now working, so maybe I can get the average now.
 

Vostro

Member
Alright, I'm not sure if this is possible. I'm attempting this using a C# web form. I have a scanner connected as a hardware keyboard via Bluetooth. When it's connected, the Android software keyboard won't show up because the hardware keyboard is turned on. What I would like is a button that would allow the user to show the software keyboard. Maybe someone has a suggestion on how this would work with C# or JavaScript. Thanks.
 
Alright, I'm not sure if this is possible. I'm attempting this using a C# web form. I have a scanner connected as a hardware keyboard via Bluetooth. When it's connected, the Android software keyboard won't show up because the hardware keyboard is turned on. What I would like is a button that would allow the user to show the software keyboard. Maybe someone has a suggestion on how this would work with C# or JavaScript. Thanks.

This couldn't be more cryptic. Could you please elaborate a bit on the hardware in the setup and what you're actually trying to achieve?
 

msv

Member
This couldn't be more cryptic. Could you please elaborate a bit on the hardware in the setup and what you're actually trying to achieve?
He has a device connected that's recognized as a hardware keyboard; because it's a hardware keyboard, Android thinks the on-screen keyboard isn't necessary, so it doesn't show.

I don't know the answer though, haven't programmed for Android. No documentation on the virtual keyboard available?
 
He has a device connected that's recognized as a hardware keyboard; because it's a hardware keyboard, Android thinks the on-screen keyboard isn't necessary, so it doesn't show.

I don't know the answer though, haven't programmed for Android. No documentation on the virtual keyboard available?

So where does C# fit in? You can develop Android software in C# now?
 
It is definitely possible to bring up the Android software keyboard, but not simply through a web service alone. If possible, you should write an Android service that can respond to your web service and show the software keyboard.
 