
Programming |OT| C is better than C++! No, C++ is better than C

NotBacon

Member
Trying to figure out where to go next. This summer I will have what looks to be a painfully simple internship at a decent computer science company doing something involving HTML, adding content to the website, and hopefully some JavaScript. This is my last summer before being done with college (I graduate in December) and I am looking to learn something on the side of my internship.

The last few years of my life have had me working in web dev stuff (specifically front end). And I generally enjoy that and kind of want to pursue that at a specific company in town. On the other hand, my girlfriend wants me to move with her to Germany and I saw that there are some cool game developers located there and obviously gaming is a passion of mine.

If I had to break down my interests of what to do after I graduate at this point, it is:
50% Web Dev
25% Game Dev
15% Misc. OOP type work
10% Android App Development

Any recommendations for what I should pick up on the side this summer? I am worried about picking the wrong one and hurting my chances at getting a job in the other field. Or is that not a big issue?

If you really want to do web dev, then build a portfolio in a public repository. Create some APIs, some SPAs, or some full stack projects.

If you really want to do game dev, then that's a different ball game. I've never worked as a game dev, but I imagine you'll need to be pretty strong with C++ and writing high performance code. Manual memory management, identifying run-time complexity, lots of profiling, etc.

Misc. OOP work is very vague.

Android app development is pretty convenient since there is a marketplace where everyone can see your apps/projects. Pretty easy to tell the company you're applying at: "here look I made this and the code is over here, now hire me".
 
Depending on how I did on the test, they'll decide if I get a phone interview and then an on-site interview. Is this normal for internship positions?

When does she want to move to Germany? I think that involves more of a life decision than a career one. I can't speak from experience about the chances of getting a job in another field, but I think for web development you can build a portfolio to show people that you can do front-end work, depending on your previous career experience.

Yeah, in my experience there's a large chunk of time getting to the inevitable programming test, and a final interview afterwards. Even for internships. I'm scared for the next time I have to take one of those.

My girlfriend actually lives in Germany; we met studying abroad. She's coming here for several months to do a university internship, so I'm lucky there. I included the whole story just in case someone had perspective on computer science in Germany.

I really don't know what to do regarding game programming vs web dev. I feel like game dev experience would at least transfer to more fields than predominantly front-end web work, but I've never been crazy about math and haven't learned about vectors yet.

Edit: thankfully that appears to come into play in Linear Algebra, which I'm taking this fall in my last semester. Maybe this summer I should just try to work on web dev skills then? If so, one last question: should I improve my front-end skills or get to learning backend languages?
 

Ledbetter

Member
Yeah, in my experience there's a large chunk of time getting to the inevitable programming test, and a final interview afterwards. Even for internships. I'm scared for the next time I have to take one of those.

My girlfriend actually lives in Germany; we met studying abroad. She's coming here for several months to do a university internship, so I'm lucky there. I included the whole story just in case someone had perspective on computer science in Germany.

I really don't know what to do regarding game programming vs web dev. I feel like game dev experience would at least transfer to more fields than predominantly front-end web work, but I've never been crazy about math and haven't learned about vectors yet.

Edit: thankfully that appears to come into play in Linear Algebra, which I'm taking this fall in my last semester. Maybe this summer I should just try to work on web dev skills then? If so, one last question: should I improve my front-end skills or get to learning backend languages?

You might as well ask in the Web dev GAF thread, as it is of course more focused on your interests. But in my humble opinion, you should start focusing on web-dev only (as you said you're more interested in that), so you can get a job in that area when you graduate, and then you can start learning game dev on your own and see if you like it. I'm sure someone else here with actual career experience can guide you better.
 

Osiris

I permanently banned my 6 year old daughter from using the PS4 for mistakenly sending grief reports as it's too hard to watch or talk to her
Oh man my plain old C is rusty.

I've got a training day with the Renesas Synergy platform & toolchain tomorrow, as the place I'm doing some C# work for is looking at moving away from ST's STM32 platform.

I already felt rusty just getting Blinky (the embedded environment's version of Hello World) working on the STM32, but I managed.
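For reference, the whole thing is only a handful of lines with the HAL; here's a rough sketch from memory (assuming an F4 part with the LED on PA5, and leaving out the CubeMX-generated clock config):

Code:
/* minimal Blinky sketch for an STM32 with the HAL; the header and pin here are assumptions */
#include "stm32f4xx_hal.h"

int main(void)
{
    HAL_Init();                                /* HAL + SysTick setup */
    /* SystemClock_Config() from CubeMX would normally go here */

    __HAL_RCC_GPIOA_CLK_ENABLE();              /* clock the GPIO port */

    GPIO_InitTypeDef led = {0};
    led.Pin   = GPIO_PIN_5;
    led.Mode  = GPIO_MODE_OUTPUT_PP;           /* push-pull output */
    led.Pull  = GPIO_NOPULL;
    led.Speed = GPIO_SPEED_FREQ_LOW;
    HAL_GPIO_Init(GPIOA, &led);

    while (1)
    {
        HAL_GPIO_TogglePin(GPIOA, GPIO_PIN_5); /* flip the LED */
        HAL_Delay(500);                        /* wait half a second */
    }
}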

I think I'm going to load up both my laptop and phone with every reference under the sun tonight haha.
 
Holy crap. I just finished the technical test for a bioinformatics internship this summer. A lot of the work sounds like it's going to be python scripting, so I thought that's what the test would be about. Instead, it was using very specific bioinformatics tools to create a very specific type of file.

I was able to complete it (not sure how well, but I was able to at least produce the file they wanted), but damn. It was my first technical test ever and so nerve-wracking.

Is this how most technical tests/interviews are? i.e. they give you no indication of what you'll be tested on?

I'd already had two interviews. Hopefully this is one of the last steps. Two interviews, a technical test, and a final interview seems a bit extreme for an internship, but sounds like that's pretty standard these days.

Update: I've been asked to come in for an in-person interview, so I guess I didn't do too poorly.
 

Ledbetter

Member
Holy crap. I just finished the technical test for a bioinformatics internship this summer. A lot of the work sounds like it's going to be python scripting, so I thought that's what the test would be about. Instead, it was using very specific bioinformatics tools to create a very specific type of file.

I was able to complete it (not sure how well, but I was able to at least produce the file they wanted), but damn. It was my first technical test ever and so nerve-wracking.

Is this how most technical tests/interviews are? i.e. they give you no indication of what you'll be tested on?

I'd already had two interviews. Hopefully this is one of the last steps. Two interviews, a technical test, and a final interview seems a bit extreme for an internship, but sounds like that's pretty standard these days.

Update: I've been asked to come in for an in-person interview, so I guess I didn't do too poorly.

That's good! Hopefully you can land that internship. Did you learn about bioinformatics in school or by yourself? Seems like an interesting topic to me.

I'm in the process of getting through an interview for an internship (explained a few posts above), tomorrow I'll have a phone interview with an engineer and depending on how I do, the last step would be an on-site interview. So I think that may be the norm for a lot of internship applications.
 
That's good! Hopefully you can land that internship. Did you learn about bioinformatics in school or by yourself? Seems like an interesting topic to me.

I'm in the process of getting through an interview for an internship (explained a few posts above), tomorrow I'll have a phone interview with an engineer and depending on how I do, the last step would be an on-site interview. So I think that may be the norm for a lot of internship applications.

It's a really neat topic. I'm finishing up my Master's degree in it right now. I love how it's a blend of life science and computer programming.

Good luck with the phone interview! The place I've been interviewing at is also a startup. This has been an interesting process, when I applied for internships during my last round of graduate school 5+ years ago, I just came in for a short interview and that was that. Times have definitely changed!
 

Aikidoka

Member
I've been having trouble switching to a different compiler. Up to this point, I've been compiling with the ifort compiler (well, technically using the mpif90 compiler wrapper around ifort with all the includes and linking). The code runs perfectly with this.
But I need to use a PGI compiler (e.g., pgfortran) for OpenACC/CUDA. I managed to get it compiled by changing the mpif90 wrapper using export OMPI_FC=pgfortran and swapping the other compiler flags (essentially -mp instead of -openmp). Yet now when I run the code, it gives a segmentation fault. Is this usual? Why would this happen?
 

NotBacon

Member
I've been having trouble switching to a different compiler. Up to this point, I've been compiling with the ifort compiler (well, technically using the mpif90 compiler wrapper around ifort with all the includes and linking). The code runs perfectly with this.
But I need to use a PGI compiler (e.g., pgfortran) for OpenACC/CUDA. I managed to get it compiled by changing the mpif90 wrapper using export OMPI_FC=pgfortran and swapping the other compiler flags (essentially -mp instead of -openmp). Yet now when I run the code, it gives a segmentation fault. Is this usual? Why would this happen?

Have you tried debugging it with gdb?
 

Kieli

Member
It's a really neat topic. I'm finishing up my Master's degree in it right now. I love how it's a blend of life science and computer programming.

Good luck with the phone interview! The place I've been interviewing at is also a startup. This has been an interesting process, when I applied for internships during my last round of graduate school 5+ years ago, I just came in for a short interview and that was that. Times have definitely changed!

Do you feel this route (of Bioinformatics) is competitive salary-wise with typical software development jobs?

One of the reasons I didn't pursue life science academia/research was the depressed wages for a decade+ (i.e. Master's, PhD, post-doc).
 

Aikidoka

Member
Have you tried debugging it with gdb?

I tried using the Portland Group Debugger, but I don't really know what to do with it. I know the general region of the code that's wrong but the debugger didn't give any useful errors. Just a SIGWINCH or something.

Also, I took a smaller program that solves an extremely similar problem but without any of the fancy OpenMPI or OpenMP stuff and tested it out as well. It works great with the ifort compiler, but with pgfortran at some point it just says "floating point exception" and quits. Looking up all this stuff is a real pain - I may just try to brute force it and write C code for CUDA and link that.

I need to have this done in a couple of days, so I don't have much leeway for debugging all this bullshit. Though I did submit a ticket to the local HPC IT guys. Hopefully they can give me a fix before Friday's over.
 
So question regarding pointers/free in C.

Let's say I have a linked list. Three nodes with values 7, 8, 9.

7 -> 8 -> 9

Now a user wants to remove the node with value of 9.

So let's say I do it this way:

Code:
function_name(parameters)

struct Node* temp = sentinel->next; //make a pointer to the start of the list

while (temp->value != 9) //loop temp through list until desired value reached
{
    temp = temp->next;
}

free(temp);
temp = NULL;

Will this do what I want it to do? Remove the node with the value of 9, and have the next pointer of the 8 node now be set to NULL?
 
So question regarding pointers/free in C.

Let's say I have a linked list. Three nodes with values 7, 8, 9.

7 -> 8 -> 9

Now a user wants to remove the node with value of 9.

So let's say I do it this way:

Code:
function_name(parameters)

struct Node* temp = sentinel->next; //make a pointer to the start of the list

while (temp->value != 9) //loop temp through list until desired value reached
{
    temp = temp->next;
}

free(temp);
temp = NULL;

Will this do what I want it to do? Remove the node with the value of 9, and have the next pointer of the 8 node now be set to NULL?

No. You're not setting the previous node's next-value. You're just setting the temp variable, which holds a copy of the value of the next variable. (Pointers are just regular values, too.) The trick is that you need to keep track of the previous node somehow. Instead of looping through the nodes and checking their values, loop through the nodes and check the next node's value. That way, you still have the previous node and can set its value. (You'll get some special cases to deal with, but nothing difficult.) If it's a doubly linked list you don't need to do that, since you have a previous pointer that does the work for you.

In this case, setting the previous node's next pointer to null is what you want to do, but it's not the general case. In general you want the previous node to point to whatever the node you're removing pointed to. What you're doing will only work when removing the last node.
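Roughly, the keep-track-of-the-previous-node version looks like this (a quick untested sketch, assuming a struct Node with value/next fields and a plain head pointer rather than your sentinel):

Code:
#include <stdlib.h>

struct Node { int value; struct Node *next; };

void remove_value(struct Node **head, int target)
{
    struct Node *prev = NULL;
    struct Node *cur = *head;

    while (cur != NULL && cur->value != target)
    {
        prev = cur;              /* remember the node before cur */
        cur = cur->next;
    }

    if (cur == NULL)             /* value not in the list */
        return;

    if (prev == NULL)            /* removing the first node */
        *head = cur->next;
    else
        prev->next = cur->next;  /* relink past the removed node */

    free(cur);
}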
 

Mr.Mike

Member
So I've started learning Go, and it seems pretty cool so far. One thing that's really cool is the defer statement, which as far as I can tell from a quick Google search otherwise only appears in Swift. The book I'm reading does say that the defer statement is new with Go, so I guess Swift's defer is inspired by Go.

Basically what it does is take some code and schedule it to execute when the enclosing function returns. As an example, when opening multiple files you can use the defer statement to have them closed when the function returns. The benefit of this is that no matter how the function returns, all opened files will be closed.

Code:
// a function that takes no parameters and returns an error value
func some_func() error { 
    f1, err := os.Open(someFile)
    if err != nil {
        return err    // failed to open any files, and will simply return
    }
    defer f1.Close()

    f2, err := os.Open(someOtherFile)
    if err != nil {
        return err    // f1.Close() will be executed before returning
    }
    defer f2.Close()

    return nil    // f2.Close() then f1.Close() will be called before returning, in that order.
}
 
So I've started learning Go, and it seems pretty cool so far. One thing that's really cool is the defer statement, which as far as I can tell from a quick Google search otherwise only appears in Swift. The book I'm reading does say that the defer statement is new with Go, so I guess Swift's defer is inspired by Go.

Basically what it does is take some code and schedule it to execute when the enclosing function returns. As an example, when opening multiple files you can use the defer statement to have them closed when the function returns. The benefit of this is that no matter how the function returns, all opened files will be closed.

Code:
// a function that takes no parameters and returns an error value
func some_func() error { 
    f1, err := os.Open(someFile)
    if err != nil {
        return err    // failed to open any files, and will simply return
    }
    defer f1.Close()

    f2, err := os.Open(someOtherFile)
    if err != nil {
        return err    // f1.Close() will be executed before returning
    }
    defer f2.Close()

    return nil    // f2.Close() then f1.Close() will be called before returning, in that order.
}
That's an alternative to RAII, which just runs some code whenever something goes out of scope. This appears in C++, D, C#, and Rust.

The advantage RAII has over defer is that if you try to return a file handle, the defer will still run.
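Plain C has neither, but GCC and Clang have a non-standard cleanup attribute that gives you something similar and scope-based; a rough sketch (the helper names here are made up):

Code:
#include <stdio.h>

/* called automatically with a pointer to the variable when it leaves scope */
static void close_file(FILE **fp)
{
    if (*fp)
        fclose(*fp);
}

int print_first_line(const char *path)
{
    FILE *f __attribute__((cleanup(close_file))) = fopen(path, "r");
    if (!f)
        return -1;                 /* nothing opened, nothing to clean up */

    char line[256];
    if (fgets(line, sizeof line, f))
        printf("%s", line);

    return 0;                      /* close_file(&f) runs here on either return path */
}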
 

Mr.Mike

Member
That's an alternative to RAII, which just runs some code whenever something goes out of scope. This appears in C++, D, C#, and Rust.

The advantage RAII has over defer is that if you try to return a file handle, the defer will still run.

I suppose it is pretty much the same thing, a bit more explicit. Although destructors (in C++ at least, I'm pretty sure) will be called at the end of any scope, and not just that of a function.

Presumably you would know better than to call defer to close a handle you intend to keep past the end of the function.
 

Ledbetter

Member
Eh, so what would happen if someone then went to the 8 node and tried to access its NEXT node? Core dump?

I think Chainsaw meant that your code will only work for deleting the last element. Imagine you would like to delete the 8 node: if you just pointed the 7 node at null, the linked list would be broken, because now you don't have any pointer to the 9 node.

Before deleting the desired node, just make the previous node point to whatever the desired node was pointing to. So in your example it wouldn't be a core dump; the 8 node would end up pointing to null, because that's what the deleted 9 node pointed to.

The key to this is making the iteration check the next element's value instead of the current one.

I don't know if that makes sense.
 
I debated putting this in the "Applying for jobs is soul crushing and exhausting" thread, but I decided it made more sense to put it here.

I've got an interview that I'm particularly excited about coming up in the near future, and I have a question based on it and past interviews I have had.

Whenever they do whiteboarding, or ask you some kind of "thought" programming question, how quickly should you be able to crank out the code for it? Is it common to erase things and fix it as you are writing it? In some of my interviews I have felt kind of stupid because I had to sit there and think for a few seconds before writing something, whether that question was something like a Fibonacci algorithm, sorting algorithm, or whatever.

I guess what I'm asking is, should it be expected for you to immediately belt out code if someone said "Mergesort, go!", or is it fine if it takes you a minute, but you understand how Mergesort (or whatever the question is about) works?

Granted, I have had interviews where I have had to do this, or they might address a problem in my code and I fix it, or I don't know what they are getting at, and those interviews have been successful (either a job offer, or getting another interview).

I'm trying to prepare as well as I can for this interview I have coming up, but I'm just worried that I will miss something and blank out on it.

Depends on your skill level and the job and how anal they are about it. Some places just want to see your problem solving process and ability to think things through.

An approach I take sometimes is to just think of the quick and ugly way to do it, and tell them that it's the naive approach, because then you can write that out quickly and talk through why it is not the best solution and how you would improve it.

Overall though I hate random algorithm questions. I don't mind problem-solving questions, I just prefer when they are more real-world problems than abstract ones, but that's just me.

Trying to figure out where to go next. This summer I will have what looks to be a painfully simple internship at a decent computer science company doing something involving HTML, adding content to the website, and hopefully some JavaScript. This is my last summer before being done with college (I graduate in December) and I am looking to learn something on the side of my internship.

The last few years of my life have had me working in web dev stuff (specifically front end). And I generally enjoy that and kind of want to pursue that at a specific company in town. On the other hand, my girlfriend wants me to move with her to Germany and I saw that there are some cool game developers located there and obviously gaming is a passion of mine.

If I had to break down my interests of what to do after I graduate at this point, it is:
50% Web Dev
25% Game Dev
15% Misc. OOP type work
10% Android App Development

Any recommendations for what I should pick up on the side this summer? I am worried about picking the wrong one and hurting my chances at getting a job in the other field. Or is that not a big issue?

My suggestion: create a GitHub account and start either contributing to open source projects or putting projects on it that show your interests. If you think you want to work on mobile, then upload some sample mobile apps that show you know how to do more than set up a table view (suggestions: make an app that pulls table data from a database somewhere, or an app that does something with video).

Honestly same thing for games. Make something in Unity or Unreal or even from scratch if you're so bold.

Whichever way you go, work on stuff that is interesting to you and you'll be better at getting jobs that are interesting to you.

Yeah, in my experience there's a large chunk of time getting to the inevitable programming test, and a final interview afterwards. Even for internships. I'm scared for the next time I have to take one of those.

My girlfriend actually lives in Germany; we met studying abroad. She's coming here for several months to do a university internship, so I'm lucky there. I included the whole story just in case someone had perspective on computer science in Germany.

I really don't know what to do regarding game programming vs web dev. I feel like game dev experience would at least transfer to more fields than predominantly front-end web work, but I've never been crazy about math and haven't learned about vectors yet.

Edit: thankfully that appears to come into play in Linear Algebra, which I'm taking this fall in my last semester. Maybe this summer I should just try to work on web dev skills then? If so, one last question: should I improve my front-end skills or get to learning backend languages?

This book: http://www.amazon.com/dp/1435458869/?tag=neogaf0e-20

and

This book: http://www.amazon.com/gp/product/1568817231/?tag=neogaf0e-20

Are probably the two most important books on math any game developer should read. If you aren't comfortable with the material in these you will probably struggle as a game developer.
 

Koren

Member
Eh, so what would happen if someone then went to the 8 node and tried to access its NEXT node? Core dump?
Totally unpredictable. I'd even say that you would be lucky if you got a segfault/core dump.

The 8 node still points towards the place in memory where the 9 node was, although it doesn't "exist" anymore.

So, just after "deleting" the 9, you'll probably still see it in the list! The compiler often won't forbid you from accessing an unallocated memory address. When the memory is used for something else, you'll get a different value, and the NEXT value of the zombie node can turn from NULL into anything else, so you can end up with new elements in your list (including a cyclic list, why not).

You don't technically need to remember the previous node to perform the deletion; you can instead test the value of the following node, which may be easier (that's actually one of the reasons a sentinel can be useful compared to a simple pointer to the first element in the list). Something like this:
Code:
function_name(parameters) {
	struct Node* temp = sentinel;

	while(temp->next != NULL) {
		struct Node* nextone = temp->next;

		if (nextone->value == 9) {
			temp->next = nextone->next;
			free(nextone);
		} else {
			temp = nextone;
		}
	}
}

(at each step, you either delete the following node if the value is 9, or advance one node in the list if the value is different from 9)
 
Totally unpredictable. I'd even say that you would be lucky if you got a segfault/core dump.

The 8 node still points towards the place in memory where the 9 node was, although it doesn't "exist" anymore.

So, just after "deleting" the 9, you'll probably still see it in the list! The compiler often won't forbid you from accessing an unallocated memory address. When the memory is used for something else, you'll get a different value, and the NEXT value of the zombie node can turn from NULL into anything else, so you can end up with new elements in your list (including a cyclic list, why not).

You don't technically need to remember the previous node to perform the deletion; you can instead test the value of the following node, which may be easier (that's actually one of the reasons a sentinel can be useful compared to a simple pointer to the first element in the list). Something like this:
Code:
function_name(parameters) {
	struct Node* temp = sentinel;

	while(temp->next != NULL) {
		struct Node* nextone = temp->next;

		if (nextone->value == 9) {
			temp->next = nextone->next;
			free(nextone);
		} else {
			temp = nextone;
		}
	}
}

(at each step, you either delete the following node if the value is 9, or advance one node in the list if the value is different from 9)

Ok so basically just use

temp->next->value and temp->next in my checks, rather than temp->value and temp. That way I can explicitly alter the NEXT pointer and do:

free(temp->next);
temp->next = NULL;
 

Koren

Member
Ok so basically just use

temp->next->value and temp->next in my checks, rather than temp->value and temp. That way I can explicitly alter the NEXT pointer
I think that's easier than keeping track of two nodes, yes.

and do:

free(temp->next);
temp->next = NULL;
If you do temp->next = NULL, you discard the 9, but also the nodes following the 9 if it's not the last one in the list.

The problem is that the nodes after the 9 are lost, but not freed (and there's no garbage collector to handle this).

This could be a memory leak if 9 isn't the last value.

If 9 is guaranteed to be the last value, that's fine, but I'd put, *at the very least*, a comment to explain this...

And in fact, I would still use

Code:
struct Node* next = temp->next->next;

free(temp->next);

temp->next = next;

Because if 9 is the last one, that still results in a NULL, and if it's not, at least you don't have a leak. It's probably easier to notice that there are extra values in a list than to find a memory leak. I know Valgrind can find those leaks, but I'm still reluctant to leave in any code that could turn into a leak later because of a careless modification.
 

Koren

Member
So a linked list is perhaps a bad example then. This is for a BST and indeed the removed value would for sure be the last value in the branch.
You're probably fine in this case, yes, if you're sure you have reached a leaf (meaning both "next" pointers, the left one and the right one are NULL)

(I'm less fond of "sentinels" pointing towards the root of a BST, though, because one of the children has no meaning, so you may have to deal with the "sentinel" differently from the other, real nodes.)

(and actually, I'm not sure one can call those "sentinels" at all; I've more often seen the term "sentinel" used for a node that marks the end, not the beginning)
 
Because you can't use a function call on the right-hand side of a let rec, to statically prevent recursive definitions that could trigger a bus error.

In fact, it's even stricter in my situation, though I can't fault OCaml for that. I have to use Caml Light, and it forbids even a constant declaration on the right-hand side of a let rec, like in:
Code:
let rec a = 2 and f = function 0 -> 1 | x -> a* (f (x-1)) in f 5;;

(correct in OCaml, but forbidden in Caml Light)
Maybe I'm not understanding but only the function f needs to be recursively bound in that situation. If you could elaborate on what you mean, I would appreciate it.

OK, I see your point now. The F# guidelines and a lot of the OCaml code I've read would have you put the let on a new line, so the if becomes indented as well, but it's not excessively indented like your example might be if you tried that. In match expressions, I tend to put long expressions on a new line.

Code:
let Index elem v =
  let rec Aux = fun
    | a b when a>b -> failwith "Not found"
    | a b -> 
        let c = (a+b)/2 in
          if v.(c) = elem then c else
          if v.(c) > elem then (Aux a (c-1))
          else (Aux (c+1) b)
  in Aux 0 (vect_length v - 1);;
Notice I put the else at the beginning of the last line. I like this because it signifies a catch-all. It also parallels the use of "in": always at the end of a line except for the final part.

I understand your gripes with syntax being too flexible. Back when I still used Common Lisp on the regular I was pleased to know that standard indentation was Whatever Emacs Decided. Golang reaps the same canonical indentation because of gofmt, which automatically formats code. I think OCaml (and by extension, Caml Light) would benefit from such a standard tool.
 

Koren

Member
Maybe I'm not understanding but only the function f needs to be recursively bound in that situation. If you could elaborate on what you mean, I would appreciate it.
It was just a dumb example of what the Caml Light syntax forbids; the function itself is totally stupid.

I should look for a real example...

In match expressions, I tend to put long expressions on a new line.
That's an idea I could use...

Notice I put the else at the beginning of the last line. I like this because it signifies a catch-all. It also parallels the use of "in": always at the end of a line except for the final part.
I agree it's nice, although that means the else changes place depending on whether it's the last if or not (besides breaking the symmetry, that makes swapping lines harder).

I understand your gripes with syntax being too flexible.
Most syntaxes are flexible, and I don't really have problems with Caml's. It's just that I can't make up my mind on a set of rules that will work 100% of the time (or at least 99%). I haven't had such a "hard" time with other languages I use.

I think OCaml (and by extension, Caml Light) would benefit from such a standard tool.
Probably, but I wonder if it's really possible to write one that makes sense most of the time (and/or why it's not used if it exists).
 
You're probably fine in this case, yes, if you're sure you have reached a leaf (meaning both "next" pointers, the left one and the right one are NULL)

(I'm less fond of "sentinels" pointing towards the root of a BST, though, because one of the children has no meaning, so you may have to deal with the "sentinel" differently from the other, real nodes.)

(and actually, I'm not sure one can call those "sentinels" at all; I've more often seen the term "sentinel" used for a node that marks the end, not the beginning)

lol fuck I forgot about the leftmost child's right children. Ok I think I actually got this now. Thanks for the help.
 
Week 7 of Programming Fundamentals and then no more programming for a while, at least. Can anyone help me out with this bit of code? It's supposed to be a JOptionPane followed by a JFrame of random color where you input your name and it spits out a new JLabel.

My problem is that every time I enter my name it crashes. Any tips would be appreciated.

Code:
/*******************************************
*
* Unit7Assignment
* This program displays a dialog box shows
* a message and a randomly chosen color.
*******************************************/

import javax.swing.*;
import java.awt.*;
import java.awt.event.*;
import java.util.*;
import java.util.Random;

public class BeersCharlesUnit7 extends JFrame
{
	private static final int WIDTH = 400;
  	private static final int HEIGHT = 400;
	private JTextField nameBox;
	private JLabel greeting;
	private String color;

    //CONSTRUCTOR
    public BeersCharlesUnit7()
    {
		setTitle("Color Changing Frame");
		setSize(WIDTH, HEIGHT);
		setLayout(new FlowLayout());
		setDefaultCloseOperation(EXIT_ON_CLOSE);
		createContents();
		setVisible(true);
	}//END CONSTRUCTOR

	//CREATE CONTENTS
	private void createContents()
	{

		JLabel namePrompt = new JLabel("What is your Name:");

		JTextField nameBox = new JTextField(24);
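		// note: this local JTextField shadows the nameBox field declared above,
		// so that field is still null when the listener later calls nameBox.getText()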

		greeting = new JLabel();

		Listener listener = new Listener();

		Container contentPane = getContentPane();

		Random ran=new Random();
		int c = ran.nextInt(5);
		switch(c)
		{

		    case 1:contentPane.setBackground(Color.GREEN);
		           namePrompt.setForeground(Color.BLUE);
		           color = "GREEN";
		           break;
		    case 2:contentPane.setBackground(Color.RED);
		           namePrompt.setForeground(Color.WHITE);
		           color = "RED";
		           break;

		    case 3:contentPane.setBackground(Color.WHITE);
		           namePrompt.setForeground(Color.BLACK);
		           color = "WHITE";
		           break;
		    case 4:contentPane.setBackground(Color.BLUE);
		           namePrompt.setForeground(Color.WHITE);
		           color = "BLUE";
		           break;
		    case 5:contentPane.setBackground(Color.YELLOW);
		           namePrompt.setForeground(Color.BLACK);
		           color = "YELLOW";
		           break;
		}//END SWITCH

		JOptionPane.showMessageDialog(null, "The following window color will be randomly chosen from Red, White, Yellow, Green, Blue\n\n Your color will be:" + color);

		add(namePrompt);
		add(nameBox);
		add(greeting);

		nameBox.addActionListener(listener);
	}//END CREATE CONTENTS

	//ACTION LISTENER
	private class Listener implements ActionListener
	{
		public void actionPerformed(ActionEvent e)
		{
			String message;
			message = "Thanks for playing" +
			nameBox.getText();
			greeting.setText(message);
		}//END ACTION
	}//END LISTENER

	public static void main(String[] args)
	{
		new BeersCharlesUnit7();
	}//END MAIN
}//END CLASS

Edit: found my problem.
 

Aikidoka

Member
I've got a question about Fortran and C interoperability.

If I have a Fortran datatype like this:
Code:
  TYPE, PUBLIC :: integer_storage
     INTEGER, DIMENSION(:,:), ALLOCATABLE :: ival
     INTEGER, DIMENSION(:,:,:), ALLOCATABLE :: ival3
  END TYPE integer_storage

Can I declare a matching struct in C like this:
Code:
typedef struct{
  
  int * ival;
  int * ival3;
  
} intblks;

I guess the main question is whether C automatically knows to use a 1-D array for a Fortran multidimensional array, since Fortran stores arrays sequentially (in column-major order, of course).
 

Koren

Member
Neither language has a clue how the other uses the data.

But assuming I understand what you're after, it can't work: ival in the C code isn't even a variable, it's a named field of a struct... That can't possibly link.

Besides, I've forgotten my Fortran, especially the low-level details, but I doubt the variable holds the address of the first integer in the multidimensional array...
 

luoapp

Member
I just copied this from the gfortran doc page https://gcc.gnu.org/onlinedocs/gfortran/Derived-Types-and-struct.html
Code:
7.1.2 Derived Types and struct

For compatibility of derived types with struct, one needs to use the BIND(C) attribute in the type declaration. 

For instance, the following type declaration

      USE ISO_C_BINDING
      TYPE, BIND(C) :: myType
        INTEGER(C_INT) :: i1, i2
        INTEGER(C_SIGNED_CHAR) :: i3
        REAL(C_DOUBLE) :: d1
        COMPLEX(C_FLOAT_COMPLEX) :: c1
        CHARACTER(KIND=C_CHAR) :: str(5)
      END TYPE
matches the following struct declaration in C

      struct {
        int i1, i2;
        /* Note: "char" might be signed or unsigned.  */
        signed char i3;
        double d1;
        float _Complex c1;
        char str[5];
      } myType;

Derived types with the C binding attribute shall not have the sequence attribute, type parameters, the extends attribute, nor type-bound procedures. 
Every component must be of interoperable type and kind and may not have the pointer or allocatable attribute. 
The names of the components are irrelevant for interoperability.

As there exist no direct Fortran equivalents, neither unions nor structs with bit field or variable-length array members are interoperable.

This really reminds me why I decided to rewrite in C instead of linking to C objects.
 

Aikidoka

Member
Neither language has a clue how the other uses the data.

But assuming I understand what you're after, it can't work: ival in the C code isn't even a variable, it's a named field of a struct... That can't possibly link.

Besides, I've forgotten my Fortran, especially the low-level details, but I doubt the variable holds the address of the first integer in the multidimensional array...

ival is also the name of the field of the type declared in Fortran. TYPE is the equivalent of struct in C. As for what the first element holds, I think you may be able to just pass a pointer to the first element into the C function.

I just copied this from the gfortran doc page https://gcc.gnu.org/onlinedocs/gfortran/Derived-Types-and-struct.html


This really reminds me why I decided to rewrite in C instead of link to C objs.
What they say about not being able to have the ALLOCATABLE attribute makes zero sense and would damn pretty much all Fortran code.
 

Aikidoka

Member
Hope this isn't bad form, but I'm gonna rant a bit on how, er, sloppily my advisors programmed this code I have to work on.

Now I don't know how C works very well (as is probably clear from my past posts), but in FORTRAN, you can write a very simple module that controls the precision of your code like so:
Code:
module precision
  
  INTEGER, PARAMETER :: pr = SELECTED_REAL_KIND(15)
  
end module
So now all you have to do is write code via:
Code:
real(pr) :: *variables*
and now the precision of the entire program is controlled by this module (double precision in this case; also, for arithmetic you need to append _pr to constants).

However, the code I have to work on is not set up this way. Instead, they hard coded the double-precision into everything (that's ~100,000 lines of code btw). On top of that, the required accuracy of the calculation doesn't even need to be in double precision - we only give a shit about the first decimal place! So, basically in order to, say, go to single precision for GPU calculations, I'd need to go through line-by-line and make all the changes by hand. Sure search and replace would work, but it's still a pain. And what's even more frustrating is how FORTRAN doesn't give a fuck about things like capitalization and they certainly weren't consistent in how they programmed it.

EDIT: also, the kicker is: when I learned about this whole precision module thing (months ago), they basically wrote it off as pointless and unnecessary. Fast-forward to this month, I'm explaining that GPUs actually give something like a 4x speedup for single precision, and they're all like "yeah go ahead and change it - we don't need double precision".

I suppose I would like to know if C or C++ has a similar way of readily changing precision.

Also, who really names their variables 'x' and 'xx' and 'xxx', etc....?
 

luoapp

Member
Can you just add the conversion at the beginning and end of your C functions and keep everything inside single precision? It shouldn't take too long. As for the reason, SELECTED_REAL_KIND was included in the Fortran 95 standard, but the code may very well be older than that. For C/C++, you can use a typedef.
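For example, something like this would be the rough C equivalent of that precision module (just a sketch; real_t and the USE_SINGLE flag are made-up names):

Code:
/* precision.h -- choose the floating-point type in one place */
#ifdef USE_SINGLE
typedef float real_t;      /* compile with -DUSE_SINGLE for single precision */
#else
typedef double real_t;     /* default: double precision */
#endif

/* everywhere else, declare variables and arrays with real_t */
real_t dot(const real_t *a, const real_t *b, int n)
{
    real_t s = 0;
    for (int i = 0; i < n; ++i)
        s += a[i] * b[i];
    return s;
}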
 

Aikidoka

Member
Can you just add the conversion at the beginning and end of your C functions and keep everything inside single precision? It shouldn't take too long. As for the reason, SELECTED_REAL_KIND was included in the Fortran 95 standard, but the code may very well be older than that. For C/C++, you can use a typedef.

I'm not sure what you mean by conversion at the beginning and end of the C program. Can C just convert the types of all the Fortran input arguments really easily?
 

luoapp

Member
I'm not sure what you mean by conversion at the beginning and end of the C program. Can C just convert the types of all the Fortran input arguments really easily?

Not automatically, but it's not too bad even doing it manually. You get an array of doubles from Fortran, generate a float array of the same size, then fill it up with the numbers from the double array. It should be very quick.

Now that I think about it, if most of the computation is done on the GPU, you may not need explicit conversion. You don't need to convert non-array variables, since the compiler should be able to do it for you. As for arrays, you can write a wrapper around cudaMemcpy and hide the conversion.
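Something along these lines (a rough sketch; the wrapper name is made up and error handling is minimal):

Code:
#include <cuda_runtime.h>
#include <stdlib.h>

/* copy a double-precision host array to the device as floats,
   hiding the narrowing conversion inside the wrapper */
cudaError_t copy_doubles_to_device_as_floats(float *d_dst, const double *h_src, size_t n)
{
    float *staging = (float *)malloc(n * sizeof(float));
    if (!staging)
        return cudaErrorMemoryAllocation;

    for (size_t i = 0; i < n; ++i)
        staging[i] = (float)h_src[i];          /* double -> float happens here */

    cudaError_t err = cudaMemcpy(d_dst, staging, n * sizeof(float),
                                 cudaMemcpyHostToDevice);
    free(staging);
    return err;
}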
 
So, any Data Analytics/Science peeps here?

I'm delving head first into R (statistical coding) after having learnt SQL
a bit of Python (though I'm going to put a hold on the Python stuff in favor of R). Any insight would be greatly appreciated.
 

Aikidoka

Member
Not automatically, but it's not too bad even doing it manually. You get an array of doubles from Fortran, generate a float array of the same size, then fill it up with the numbers from the double array. It should be very quick.

Now that I think about it, if most of the computation is done on the GPU, you may not need explicit conversion. You don't need to convert non-array variables, since the compiler should be able to do it for you. As for arrays, you can write a wrapper around cudaMemcpy and hide the conversion.

So if I'm understanding you, then a statement like:
Code:
for(int i=0; i<DIM; i++)
       array_of_floats[i] = array_of_doubles[i];
is a valid statement in C and will be type-converted automatically?
Or, I guess I could just do an explicit (float) cast. Yeah, I suppose that's a simple quick fix. Though I could run into memory problems by having to make two copies of arrays that, for the real applications, are already too big to fit on a single CPU.
 

luoapp

Member
So if I'm understanding you, then a statement like:
Code:
for(int i=0; i<DIM; i++)
       array_of_floats[i] = array_of_doubles[i];
is a valid statement in C and will be type-converted automatically?
Or, I guess I could just do an explicit (float) cast. Yeah, I suppose that's a simple quick fix. Though I could run into memory problems by having to make two copies of arrays that, for the real applications, are already too big to fit on a single CPU.

Basically, yes. You can free the memory afterwards. But I am a little surprised to hear memory size will be an issue; I mean, memory is dirt cheap these days. Does Titan only give you 2GB per core?
 

Aikidoka

Member
Basically, yes. You can free the memory afterwards. But I am a little surprised to hear memory size will be an issue; I mean, memory is dirt cheap these days. Does Titan only give you 2GB per core?

Yeah, the CPU has 32 GB total over 16 cores, and it looks like the Tesla K20 has only 6 GB of memory. So I guess I'll need to do some pipelining and perhaps have the CPU do things in parallel too. I have not run this myself on Titan (I'm doing all the prototyping on a machine that's not so expensive), but when the group ran it a few years ago, they said that some of the arrays couldn't all fit on one PE, and my work is just making everything more compute-intensive.

Now, Summit, on the other hand, will have >512 GB of memory per node with somewhere around 3-5 Volta GPUs per node (it's still under NDA, I think). But I probably won't be with the group in 2018 to mess with it, unfortunately (well, perhaps fortunately, depending on how much I end up enjoying this kind of thing).
 

Koren

Member
ival is also the name of the field of the type declared in Fortran.
Yes, but if I'm not mistaken, the names of the fields don't matter, just the byte offsets (assuming identical alignment mechanisms, but pointers are safe).

I thought yesterday evening that he wanted to share a variable, but I now understand it may be to call a function.

I still think the n-D arrays store more than just ints, anyway.
 
Odd question but is anyone familiar with Processing?

This might be a general question, but how would I go about incrementing something in a sequence? So say I wanted to perform a task 100 times, but on every instance that's a multiple of 10, an event would take place.

Would something like:

for ( int i = 1; i >100; i++)

if ( int i/10 = int)


I know this isn't exactly clear but hopefully this helps narrow down what I'm trying to explain.

Maybe I shouldn't be using a for loop and should just be incrementing the object:

int i++;

and then make a call to have an event happen when the incrementation reaches a number?

void draw(){

int i++;

if (int i/10=int){
ellipse(int i, 20, 20, 20);
}


My knowledge of programming is very basic and I'm working on a final project, so if I'm doing something wrong or not explaining something fully, please let me know.

Edit: Just tried it and this wouldn't work since int isn't being declared as anything. Any ideas on how I would make this detect a whole number?
 

peakish

Member
Odd question but is anyone familiar with Processing?

This might be a general question, but how would I go about incrementing something in a sequence? So say I wanted to perform a task 100 times, but on every instance that's a multiple of 10, an event would take place.

Would something like:

for ( int i = 1; i >100; i++)

if ( int i/10 = int)


I know this isn't exactly clear but hopefully this helps narrow down what I'm trying to explain.

Maybe I shouldn't be using a for loop and should just be incrementing the object:

int i++;

and then make a call to have an event happen when the incrementation reaches a number?

void draw(){

int i++;

if (int i/10=int){
ellipse(int i, 20, 20, 20);
}


My knowledge of programming is very basic and I'm working on a final project, so if I'm doing something wrong or not explaining something fully, please let me know.
A for loop is best if you're looping over a known number of iterations.

Use the 'modulus' operator ('%' in C, basically a remainder operator) to do things every so often:
Code:
// 1 % 5 == 1
// 5 % 5 == 0

int i = 5;
if (i % 5 == 0) {
    ...
}
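So for the original case (100 iterations, with an event on every multiple of 10) it would look roughly like this in plain C; the same % trick works in Processing, and the printf is just a stand-in for whatever the event is:

Code:
#include <stdio.h>

int main(void)
{
    for (int i = 1; i <= 100; i++)
    {
        /* ...the task done on every iteration goes here... */

        if (i % 10 == 0)                     /* true on 10, 20, ..., 100 */
            printf("event at i = %d\n", i);  /* stand-in for the event */
    }
    return 0;
}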
 
A for loop is best if you're looping over a known number of iterations.

Use the 'modulus' operator ('%' in C, basically a remainder operator) to do things every so often:
Code:
// 1 % 5 == 1
// 5 % 5 == 0

int i = 5;
if (i % 5 == 0) {
    ...
}

I was just looking into this operator. Thank you so much!
 