
Programming |OT| C is better than C++! No, C++ is better than C

Somnid

Member
I'm having a hell of a time finding resources for USB programming.

All I want to do is to be able to create an app that can send messages to an Android app over USB, like some silly hello world chat app. I figured it would be covered in an Android USB 101 tutorial, but as I'm finding, there's like nothing about how to go about this. The most I can find is using ADB to set up a TCP server that's tunneled over USB, which isn't what I want. Does anyone know of some simple demo code or another resource that shows how to do this?
 
I'm going to start applying for some student internships.

Would anyone be willing to look over my resume and provide some constructive feedback about the content?
 

Pau

Member
So I'm using NetBeans and added a jar to the project library to be able to parse JSON files. But when I import the classes, it tells me the package does not exist.

I managed to do this for another (smaller) project and didn't have any problems. The project is also hosted on Dropbox, if that makes any difference.
 

Stumpokapow

listen to the mad man
I've been a programmer for a long time, but I haven't done a large scale web project for a while. I want to investigate using npm for client-side front-end package management. I am not using Node.js for anything related to this project. Basically I just want to use npm to manage versions on the JS libraries the project includes. I have a bunch of dependencies, and I want to keep them up to date. I have never used npm before.

I created a test folder (~/test/), and used npm to install the libraries I want: bootstrap, jquery, d3, js-cookie, crossfilter, clipboard, etc. I have the packages in a package.json file, so I know for future deployments it's no problem to use the package.json to do the download. In ~/test/, I have node_modules/. Under node_modules/, I have a variety of subfolders for each package I've installed.

Let's take, for example, just one dependency: jQuery.

Code:
npm install jquery --save
<stuff happens>
cd node_modules/jquery
ls
AUTHORS.txt  bower.json  dist  external  LICENSE.txt  package.json  README.md  src

Here's where I get confused. The actual file I want is ~/test/node_modules/jquery/dist/jquery.min.js. But there's an enormous amount of other stuff sitting around. The src folder, the external folder, the other stuff in the dist folder. What I want to do is install just the files I need to specifically use the library in my web project.

For just jquery, it's no problem; I can just have a make file that copies jquery.min.js into the web project folder. But with a lot of libraries, that's a lot of work. And given how many people rave about how great npm is for front-end package management, it seems like there would be some automated way to deal with this.

I looked into the package.json for each library I was installing. I found that about half of them had a key "browser" which pointed to the file I wanted, about half had a key "main" which pointed to the file I wanted, and some had neither, or had one of them but it pointed to something other than the file I want. In addition, a few libraries come with css files and other things I need, and those weren't listed properly in the package.json file. Finally, the package.json file pointed to the non-minified versions. Normally I'd think "well, I'll just minify them myself", but the packages come with minified versions, so why exactly do I need to rebuild? So it's not like I can just parse the package.json file in each folder and copy whatever I find there.
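To make that concrete, here's roughly the copying script I'd end up writing (a sketch; the destination folder and package list are made up, and it already falls over on the cases I just described):

Code:
// copy-deps.js - naive sketch: copy each package's "browser"/"main" entry
const fs = require("fs");
const path = require("path");

for (const dep of ["jquery", "d3", "js-cookie"]) {
  const pkgPath = path.join("node_modules", dep, "package.json");
  const pkg = JSON.parse(fs.readFileSync(pkgPath, "utf8"));
  const entry = pkg.browser || pkg.main;  // about half use each key
  if (typeof entry === "string") {        // "browser" can also be an object
    const src = path.join("node_modules", dep, entry);
    fs.writeFileSync(path.join("site", "js", path.basename(entry)),
                     fs.readFileSync(src));
  }
}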

This blog post suggests maybe actually npm sucks at this use case and lol oh well.

What am I missing here? What's the actual step I should use to deploy the files I got from the downloaded npm packages into my web site's folder? Some links suggest Browserify, but that seems to not quite be right, that seems to be for compiling multiple modules into one package, which is not what I want to do, and it still seems to require me to manually do a lot of heavy lifting to tell it how to move the stuff.
 

Koren

Member
Just wanted to say that Beamer is a shitty, really shitty hack.

Sometimes, you need to say things to sleep better ;)

After all these years, I still find stupid things. Today's find: you can't use the % character in a piece of code inside an uncover object (at least inside a tikz object; I still have to dig to see what exactly triggers the bug). Somehow, it processes it as a comment.
 

Somnid

Member
I've been a programmer for a long time, but I haven't done a large scale web project for a while. I want to investigate using npm for client-side front-end package management. [...] What am I missing here? What's the actual step I should use to deploy the files I got from the downloaded npm packages into my web site's folder? [detailed post]

Typically in large projects there is some manual work to set up your paths for this reason; there's not necessarily an easy programmatic way to grab only the things you want. These paths are typically added to the build script, or more cleanly stuck in some JSON file and imported, and they have to be updated if you add or remove dependencies. NPM can be used for everything, but most packages assume you are using them with node.js and will define things in "main" that are the main require entry point.

For front-end packages most web devs use bower (https://bower.io/); it's the same idea, but all the packages are front-end, so I'd recommend you use that instead. The bower.json will look very similar, and most decent packages have a "main" that lists the files you should stick in the page. You can use something like https://www.npmjs.com/package/main-bower-files to pick them up for your build script's concatenation, minification and what-not.
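For example, with gulp the whole vendor step can look something like this (a sketch; it assumes the gulp-concat and gulp-uglify plugins, and the output path is made up):

Code:
// gulpfile.js - sketch: bundle the bower "main" JS files into one vendor.js
const gulp = require('gulp');
const mainBowerFiles = require('main-bower-files');
const concat = require('gulp-concat');
const uglify = require('gulp-uglify');

gulp.task('vendor', function () {
  return gulp.src(mainBowerFiles('**/*.js')) // JS entries from each bower.json "main"
    .pipe(concat('vendor.js'))               // concatenate in dependency order
    .pipe(uglify())                          // minify
    .pipe(gulp.dest('public/js'));           // drop the bundle in the site folder
});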
 

Stumpokapow

listen to the mad man
Okay. Good to know I wasn't missing anything super obvious. I'll look into bower, but knowing that anything is going to require a bunch of fiddling on a per-package basis, I'll probably just manually update stuff as I'm doing now. I didn't plan on doing anything with the libraries in terms of having a build process.

I just thought package management for js front-end stuff had gotten good enough to have a push-button solution. Thanks. I'll check again in a year :)
 
For front-end packages most web devs use bower (https://bower.io/); it's the same idea, but all the packages are front-end, so I'd recommend you use that instead. The bower.json will look very similar, and most decent packages have a "main" that lists the files you should stick in the page. You can use something like https://www.npmjs.com/package/main-bower-files to pick them up for your build script's concatenation, minification and what-not.

The first part might have been true some time ago, but Bower has been slowly (and sometimes more quickly) dying in the past few years despite some attempts to keep it alive. Meanwhile every module (and I mean every single one, even those, like many Angular modules, that claim you should install them with bower) is available on NPM.

Stump, try a module bundler like http://browserify.org/ (or Webpack or Rollup). There's some learning curve, but after a short while you can stop worrying about your deps and instead just do stuff.

For example install some deps:

Code:
npm install browserify babelify watchify jquery d3 ... --save


Your main file will look like this (let's say app.js):

Code:
import $ from "jquery";
import bootstrap from "bootstrap";
import d3 from "d3";
import jsCookie from "js-cookie";

$("h1").html("foobar");

and add this snippet to your package.json

Code:
  "scripts": {
    "watch:js": "watchify -v -t babelify path/to/app.js -o path/to/app.bundle.js -d",
   },

and start watching for changes with

Code:
npm run watch:js

and you include the bundle to your html file like this

Code:
<script src="app.bundle.js"></script>

and now every change you make refreshes the bundle and you can just reload your page. Want to install more dependencies? npm install. Want to remove dependencies? npm uninstall.

Autowiring your dependencies to your HTML files with some Grunt task, or manually inserting your files from a "bower_components" or "vendor" folder, might sound like fun, but in the end it just makes your life miserable. Meanwhile, just using modules is cruise control for cool.

edit: Want to learn everything you wanted to learn from modern web development? Join us in http://www.neogaf.com/forum/showthread.php?t=756776, the Web Dev OT
 

Armaly

Member
Can you guys recommend me material to get a better understanding of C++ pointers? I've been using Java for years and I'm having trouble understanding this.

Currently I have a member of a struct that's a pointer, and I'm trying to later set that pointer to something in a method. I found a Stack Overflow answer, but I don't really understand how they're explaining it, so I want to start from the beginning.
 

kingslunk

Member
Can you guys recommend me material to get a better understanding of C++ pointers? I've been using Java for years and I'm having trouble understanding this.

Currently I have a member of a struct that's a pointer, and I'm trying to later set that pointer to something in a method.

A pointer is a variable whose value is the memory address of another variable.

In layman's terms: you have a variable that points to a section of memory that contains some data. Example using structs:

Code:
struct example {
    int *myPointer;
};

So let's say you have an object of example called foo. foo holds a pointer of integer type (which means it can point to memory that holds an integer).

So let's say we have our object of the struct example and a few integers.

Code:
example foo; 
int bar = 5;
int bar_2 = 6;

We want foo's integer pointer (myPointer) to point to bar's address.

bar right now evaluates to the value 5, not its address, and myPointer needs an address to point to, so we can't do myPointer = bar. We need bar's address.

To do this we use the address-of operator, denoted by an ampersand &.

&bar returns the address of bar.

so if we do

Code:
myPointer = &bar;  // my integer pointer myPointer now points to bar's address.

If we return myPointer right now it'll return an address in memory. If we want to know the value inside of the block of memory we have to "dereference" the pointer which is denoted by the asterisk *.

so *myPointer will return bar's value 5.

Quick rundown of everything.
Code:
foo.myPointer = bar;   // will not work: it needs bar's address, not its value
foo.myPointer = &bar;  // myPointer now points to bar's address
*foo.myPointer;        // evaluates to the value stored at the pointed-to address

Now if you dereference myPointer and change the value, bar's value will also change, because you're changing the value in memory.

Code:
*foo.myPointer = 900;

bar will now return 900.
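For reference, here are the snippets above assembled into one compilable program:

Code:
#include <iostream>

struct example {
    int *myPointer;
};

int main()
{
    example foo;
    int bar = 5;

    foo.myPointer = &bar;                 // point at bar's address
    std::cout << *foo.myPointer << "\n";  // prints 5

    *foo.myPointer = 900;                 // write through the pointer
    std::cout << bar << "\n";             // prints 900
}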

Overall this is a pretty good tutorial of pointers:
http://www.cplusplus.com/doc/tutorial/pointers/

You can pm me if you have any specific questions or if something is confusing. Tried to give you a quick easy rundown to get you started.

You also need to make sure you delete any memory you allocate with new once you are no longer using it (not an issue in this example, since bar lives on the stack). This is beyond the scope of what I typed.

If you're learning C++ I highly recommend reading Scott Meyers' Effective C++.
 

BeforeU

Oft hope is born when all is forlorn.
Guys, I am thinking of enrolling in Xamarin University

https://www.xamarin.com/university

Just to give you a little background: I did computer engineering and right now I work as an application developer. My job isn't super technical when it comes to programming, and that's what I am most afraid of. I want to advance to a better job, but everything requires so much programming knowledge, which I just don't have. I did Java and a little bit of C in university but that's about it.

So my goal is to learn mobile app development, not just for a better job in the future but maybe to start my own startup.

Now, considering all this, do you think Xamarin University is a good way to start? The fee is about $2k for a year.
 

vypek

Member
Guys, I am thinking of enrolling in Xamarin University [...] Do you think Xamarin University is a good way to start? The fee is about $2k for a year.

Personally, before starting a paid path, I'd try some free ones. How about the mobile app development courses on Udacity? You can do all of those for free instead of doing the paid versions on the site. The Android one is backed by Google.
 

BeforeU

Oft hope is born when all is forlorn.
Personally, before starting a paid path, I'd try some free ones. How about the mobile app development courses on Udacity? You can do all of those for free instead of doing the paid versions on the site. The Android one is backed by Google.

wow great stuff. I was just checking their site.

Thanks
 

Stumpokapow

listen to the mad man
Stump, try a module bundler like http://browserify.org/ (or Webpack or Rollup). There's some learning curve, but after a short while you can stop worrying about your deps and instead just do stuff. [detailed example]

So, it's a fairly large web app with about a dozen dependencies (largest: jquery, bootstrap, d3, crossfilter, dc, topojson) + a bunch of modules written just for the project.

My instinct would be that bundling is not especially useful for the project, because the costs saved (fewer requests to load the JS modules) aren't really equal to the costs paid (a larger download, especially for pages that don't use most of the modules). I dunno what the ideal caching tradeoff is there. Do you know of any sources that discuss this problem?

Right now, our build process is that every template has some metadata listing which JS modules it asks for. Those JS modules are then injected into the HTML as needed. So we keep everything stored as separate minified js files. We have the versions we're using included in our source control, so production deployments download them all from source control.

Am I really going to get a big performance benefit by setting up this system and switching to a single bundled js module?
 

Somnid

Member
So, it's a fairly large web app with about a dozen dependencies (largest: jquery, bootstrap, d3, crossfilter, dc, topojson) + a bunch of modules written just for the project. [...] Am I really going to get a big performance benefit by setting up this system and switching to a single bundled js module?

Concatenating and minifying is cheap and easy, and you should do it to keep requests small and the number of requests down (because more than 4 at once will block). Module systems are a bit of a different beast. I generally don't advocate for them because they tend to add complexity and not really buy you a whole lot; you typically only need a little bit of manual work on the build, and you'll produce better performing code. If you have enough stuff that organization and load order is a huge issue, you have bigger problems.
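For example, the entire concatenate-and-minify step can be a single uglify-js call in your build script (file names here are made up):

Code:
uglifyjs vendor/jquery.min.js js/app.js js/charts.js --compress --mangle -o dist/site.min.js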

Here's an interesting article that does a couple of comparisons on different module systems: https://nolanlawson.com/2016/08/15/the-cost-of-small-modules/
 

Stumpokapow

listen to the mad man
Thanks. I left web dev around 2012 (so after jQuery and before less/sass/Node/react/angular/meteor/etc became compulsory). So there's a lot of workflow stuff I just never got the skills for. :)
 
Does anyone know Backus–Naur Form? If so, PM me, because I'm doing homework on BNF. I'm translating both the cin statement and the cout statement to BNF. I attempted them and want to share my solutions with someone to see if they're correct or not.
 
So, it's a fairly large web app with about a dozen dependencies (largest: jquery, bootstrap, d3, crossfilter, dc, topojson) + a bunch of modules written just for the project. [...] Am I really going to get a big performance benefit by setting up this system and switching to a single bundled js module?

Bundlers like Browserify do much more than minifying and concatenating; their main goal is to bring support for modules to browsers, because browsers aren't quite there yet and won't be for years. Userland solved this issue years ago by introducing module bundlers, which transform your (Node and other) modules into a format that the browser can use. During that transformation process you can do tons of other things, like minifying your code, or detecting environment variables, or transforming your code from the JavaScript of tomorrow to the JavaScript of today.

The cost of a bigger bundle is most likely rather minimal; the user will eventually have to load it all anyway, and the single download is cached for a certain period of time, so it's a one-time cost. Modern bundlers like Rollup can do advanced things like tree shaking, which removes unused code from your sources. If you are worried about bigger bundle sizes, advanced bundlers like Webpack can create page-specific bundles.
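For example, a minimal Webpack config for page-specific bundles can look like this (a sketch; the entry names and paths are made up):

Code:
// webpack.config.js - sketch: one bundle per page
module.exports = {
  entry: {
    home: "./src/pages/home.js",
    dashboard: "./src/pages/dashboard.js"
  },
  output: {
    path: __dirname + "/dist",
    filename: "[name].bundle.js" // emits home.bundle.js and dashboard.bundle.js
  }
};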

I generally don't advocate for them because they tend to add complexity and not really buy you a whole lot; you typically only need a little bit of manual work on the build, and you'll produce better performing code.

Do they add complexity? It depends. I outlined all the steps needed for a basic Browserified build; it's barely a few commands. Manual work sounds like a terrible waste of time when others have solved the problem ten times before, ten times better than you can. Better performing code has nothing to do with modules in general.

If you need to drive one screw, using the other end of a spoon is okay. If you have to drive thousands of screws (like Stump said himself, a fairly complex project), learning to use actual tools made for the job will save you tons of pain and time in the end, even if there's some investment in the beginning. Bootstrap is built on modules, d3 is built on modules, topojson is built on modules...

Not only that: when you pick up the best practices, then when your project ends up in someone else's hands they don't need to worry about solving the puzzle of your Own Better BuildProcess^TM that only you know the details behind (we have all been there, right?).

Not only that, but third party dependencies in source control are a great way to introduce bit rot and to ensure that keeping them up to date is as hard as possible. Not to mention the endless lines of code in your source control history that aren't related to your own code. NPM (and every other package manager out there) gets tons of undeserved hate, mostly because people don't understand that it's actually pretty friggin' hard to make package managers.

The most common reason for having dependencies in source control is the fear of said dependencies disappearing. When you say that it's most likely not going to happen, the "left-pad" incident often gets quoted as the prime example. When you actually think about it beyond "hurr durr 13 lines broke builds", the following happened:

1. Builds and downloads were broken... for 7 minutes
2. All existing code worked just fine
3. NPM ensured that the "left-pad" type of case won't ever happen again.

I have used NPM modules and npm modules only for (5?) years now and I have never witnessed a dependency just disappearing. I am not very worried about the future either; if NPM is down 15 years in the future, I'd really have to think hard to come up with a situation where I a) couldn't find said dependencies anywhere and b) wanted to use that code anyway.

Write modules, stop worrying, enjoy life and best of all enjoy web development, because after you get over the sea of trolls and grumpy old men, it's actually super fun. If it wasn't, I most likely wouldn't do it as my day job.
 
Man, my job is kinda pushing me towards learning JavaScript and some web dev, but that shit is crazy as hell. I found a "front-end handbook" and it was like 130 pages of things you apparently need to know these days, from builders to transpilers and post-CSS processors and god knows what else.
 
Man, my job is kinda pushing me towards learning JavaScript and some web dev, but that shit is crazy as hell. I found a "front-end handbook" and it was like 130 pages of things you apparently need to know these days, from builders to transpilers and post-CSS processors and god knows what else.

Like I said in my previous post, you can still just use the spoon. You can write all your code in a single file and you can copy and paste third party dependencies from the internet. You can write plain CSS and spend hours making sure that every vendor prefix and browser quirk is covered. You can write plain old EcmaScript 5 code and you can just link .html pages to .html pages. Just know that you'll be responsible for that spaghetti forever.

The browser is a HUGE ASS PLATFORM. There are like A BILLION OF THEM. And each and every one of them has billions of quirks. Web development is hard, not because of the tools, but because of the targeted platform. If you could press a reset switch and just make everyone always use the most modern browser, there wouldn't be any problems. But you really, really can't. Which is why the tools exist: to make it easier for those that spend day in and day out building complex projects for complex clients, or those that just want to create the best code possible in the shortest amount of time. They are a lifesaver, they really are.
 
Like I said in my previous post, you can still just use the spoon. [...] They are a lifesaver, they really are.

Yeah, I can see why it all exists, it's just kinda overwhelming when you get told to "go learn some JavaScript". :lol

My day job is in business intelligence, building reports that get integrated into a software solution we provide to customers. We are reaching some limits in what we can do with tools like Reporting Services and Tableau to visualize data, so we're considering starting to use d3.js or one of the libraries built on top of it. Luckily we don't have to worry about supporting old browsers, because my employer just tells clients that they can't use our software unless they use a browser that is at least IE11, and preferably the latest Chrome or Firefox.
 

Somnid

Member
Do they add complexity? It depends. [...] Write modules, stop worrying, enjoy life and best of all enjoy web development. [detailed post]

I'd argue none of this is actually true. I'm not building the most clever build system, and that's the point: I'm just doing extremely simple script ordering, and partitioning if there are multiple pages. In fact it improves on everything you talk about. There is no learning curve; any junior dev can see that scripts X, Y, Z are loaded into the page and behave as expected. No require syntax, no hidden dependency graph, no module loader black boxes, no performance impact of any sort. There are significantly fewer build issues because the build is kept small, just enough to get the core benefit. Often you can make a few small changes and run the code without the build system at all. Very approachable. I believe build systems are some of the more overused things in modern web development. You simply don't need that much from them; the problems you are solving are small and the solution is big.
 

Antagon

Member
Here at work they seem to expect me to learn some front-end development as well. Development is going to be for Angular and Angular 2, plus a bit of Vue.js from an older project. This also means learning JavaScript and TypeScript and getting a grip on how gulp, webpack and npm work.

The hard thing is that lately they've been pushing more and more projects onto our team, all with different domains and technology. There's also some Mirth Connect work involved, and some development on Liferay. This is on top of our older backend work on Wicket (+ Spring and Hibernate) based applications.

I feel like I'm pretty good at picking up new stuff, but this is really starting to become too much.
 

JaMarco

Member
A pointer is a variable whose value is the memory address of another variable. [...] If you're learning C++ I highly recommend reading Scott Meyers' Effective C++. [detailed explanation]
In what situations would you need to mess with a variable's memory address?
 

Ledbetter

Member
So my dad just sent me a link to this coding package. Is this a good deal? I started coding in college, but dropped it because I wasn't in a good place at that time. Are these languages useful or should I look elsewhere?

https://store.idropnews.com/sales/e...AppleMac_B_SL_Sale_Giveaways&utm_medium=email

If you're looking to learn the basics of web development, it looks good. It doesn't seem to get too involved with Swift, so if you want to learn iOS programming you'll probably want to look elsewhere. But then, it depends on what you want to do.
 

Makai

Member
So my dad just sent me a link to this coding package. Is this a good deal? I started coding in college, but dropped it because I wasn't in a good place at that time. Are these languages useful or should I look elsewhere?

https://store.idropnews.com/sales/e...AppleMac_B_SL_Sale_Giveaways&utm_medium=email
Programming language resources are free. Apple even wrote their own book for Swift.

First, figure out what you want to make - games, apps, websites, etc. You probably want to go with C#/Unity for games or HTML/CSS/Javascript for websites. Or if you want to make boring business software, you can learn Java.
 

Koren

Member
I have a quick algorithmic question...

I work with boolean matrices with p rows and q columns.

For efficiency purposes, I store each one as a (p*q)-bit integer, row major.

So

001
110
100

is stored as 116 (001110100b)

All the operations I do are performed easily with ints (such as xoring matrices) or shifts and ands (to extract the int representing a row) and LUTs (to count the number of 1s).

Except (mostly) one: extracting a column (I want 3 (011b) for the first column, 2 for the second, etc.).

I mean, I know how to do it, but given the collection of bit tricks I've seen, I'm looking for an efficient (and non-trivial) way to do it.


In other words, if an integer a is, in binary,
a_(n-1) ... a_2 a_1 a_0
I want the integer which is, in binary,
a_(k+(p-1)q) ... a_(k+2q) a_(k+q) a_k


Any clever idea?

Edit: pretty sure there's an x86 opcode that does this for specific p and q (PEXT, from the BMI2 extension, extracts the bits selected by a mask), but I'm looking for a general trick.

Edit: just for information, I'm currently doing
x >>= k
x &= 0b10...010...010...01
to "push the column to the right and remove the others",
and using a dict with 2**p entries to get the result.
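Concretely, for p = q = 3 the whole shift / mask / LUT thing looks like this (Python sketch):

Code:
p, q = 3, 3
col_mask = sum(1 << (i * q) for i in range(p))   # 0b001001001

# LUT with 2**p entries: spread bit pattern -> packed p-bit column
lut = {}
for bits in range(1 << p):
    spread = sum(((bits >> i) & 1) << (i * q) for i in range(p))
    lut[spread] = bits

def column(x, k):
    """Column at bit offset k inside each row (k = q-1 is the leftmost)."""
    return lut[(x >> k) & col_mask]

m = 0b001110100       # the example matrix, rows 001 / 110 / 100
print(column(m, 2))   # leftmost column -> 0b011 = 3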


(I'm also looking for the smallest integer representing a matrix that can be obtained by permuting rows and columns of a given matrix, but that's the next step ;) Edit: this one may be really tricky, actually)
 

luoapp

Member
I have a quick algorithmic question... I work with boolean matrices with p rows and q columns. For efficiency purposes, I store each one as a (p*q)-bit integer, row major. [...] Any clever idea? [detailed post]

"I want the integer which is, in binary,
a_k+(p-1)q ... a_k+2q a_k+q"
so here p and q are the size of the matrix? and the add operation no longer binary?
 

Koren

Member
"I want the integer which is, in binary,
a_k+(p-1)q ... a_k+2q a_k+q"
so here p and q are the size of the matrix?
Yes.

and the add operation is no longer binary?
Well, I shouldn't have put +; I meant the concatenation of bits.

For example, I want the 32 bit number:

xxAxxxBx xxCxxxDx xxExxxFx xxGxxxHx

where x-s are 0 or 1, transformed into the 8 bit number

ABCDEFGH


Except that I don't want to rely on a given binary length (I'm working with 300+ bit values).


But don't bother too much; the shift / and / LUT approach works quite well, I was just wondering whether there was a clever bit trick to do this.
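For the record, a generic software version of that mask-based extraction (what PEXT does in hardware), which works whatever the bit length (Python sketch):

Code:
def pext(x, mask):
    """Gather the bits of x selected by mask into the low bits of the result."""
    out = bit = 0
    while mask:
        low = mask & -mask    # lowest set bit of the mask
        if x & low:
            out |= 1 << bit
        bit += 1
        mask &= mask - 1      # clear that bit
    return out

# xxAxxxBx xxCxxxDx xxExxxFx xxGxxxHx  ->  ABCDEFGH
mask = 0b00100010001000100010001000100010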
 

diaspora

Member
I need a spot of help understanding what's going on with a particular piece of C++ code. I generally have an idea of how qsort works, but:
Code:
#include <cstdlib>   // qsort
#include <cstring>   // strcmp
#include <iostream>
using namespace std;

int compareStr(const void *val1, const void *val2)
{
   const char *v1, *v2;

   v1 = *(char **)val1;
   v2 = *(char **)val2;
   return strcmp(v1, v2);
}

int main()
{
   // Sorting an array of strings
   const char *arrStr[] = {"hij", "klm", "abc", "opq", "defg"};
   int length = sizeof(arrStr) / sizeof(arrStr[0]);
   cout << "arrStr before sorting:" << endl;
   for (int i = 0; i < length; i++)
      cout << arrStr[i] << ", ";
   cout << endl;
   qsort(arrStr, length, sizeof(arrStr[0]), compareStr);

   cout << "arrStr after sorting:" << endl;
   for (int i = 0; i < length; i++)
      cout << arrStr[i] << ", ";
   cout << endl;
}

Specifically: *(char **)

I'm... not entirely clear what's happening with this. Presumably it's being used to cast val1 and val2 into char pointers and assign them to v1 and v2, but I'm not clear on how that's done with *(char **).
 

Koren

Member
Well, each element of arrStr is a pointer to an address storing a string.

So arrStr[i] is a char*

but qsort will provide compareStr with the addresses of two elements of the array that it has to compare, such as

&(arrStr[i]) and &(arrStr[j])

Those arguments are pointers to an address that contains a pointer to an address storing a string.

So the arguments of compareStr will be (char**)

But, for polymorphism purposes, those arguments are cast into void*

So

(char**) val1

is just used to recast those arguments into (char**)

But you have &(arrStr[i]) and &(arrStr[j]), and what you want to compare are arrStr[i] and arrStr[j]

The leading * is used to get the element pointed to by val1 and val2 ("remove the &")...

I hope it's understandable (and correct...!)
 

diaspora

Member
Well, each element of arrStr is a pointer to an address storing a string. So arrStr[i] is a char* [...] I hope it's understandable (and correct...!)


So basically... qsort doesn't care about what we're comparing whether it's int, char, etc. Qsort casts the array elements into type void and passes the addresses of them into the callback and trusts that the callback does its job as far as comparisons between the two values and returns 0, -1, or 1.
 

Koren

Member
So basically... qsort doesn't care about what we're comparing whether it's int, char, etc.
Yes... The kind of polymorphism-hack you'll find in C.

Qsort casts the array elements into type void and passes the addresses of them into the callback
It casts the addresses of them into void* and passes those to the callback, but yes.

and trusts that the callback does its job as far as comparisons between the two values and returns 0, -1, or 1.
That's it. Though it can return any positive or negative value instead of 1 and -1 (so that you could return (*(int*)v2) - (*(int*)v1) to compare integers, for example)
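For example, the same void* dance for an array of ints:

Code:
#include <cstdlib>
#include <iostream>

int compareInt(const void *a, const void *b)
{
    // ascending; v2 - v1 as above would sort descending
    // (the subtraction can overflow for extreme values, so use with care)
    return *(const int *)a - *(const int *)b;
}

int main()
{
    int v[] = {3, 1, 2};
    qsort(v, 3, sizeof v[0], compareInt);
    std::cout << v[0] << v[1] << v[2] << "\n";  // prints 123
}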
 

diaspora

Member
Yes... The kind of polymorphism-hack you'll find in C.


It casts the addresses of them into void* and passes those to the callback, but yes.


That's it. Though it can return any positive or negative value instead of 1 and -1 (so that you could return (*(int*)v2) - (*(int*)v1) to compare integers, for example)

Excellent, I think I get it now. Thank you! A better explanation than what I got in class.
 

peakish

Member
Is there a consensus on where to do variable declarations in C/C++? I've changed from declaring everything up-front to doing it wherever I first use the variable in my code. The handy `auto` keyword in C++ has been encouraging me a bit, although I'm definitely overusing it right now due to its freshness.

I see the advantage of this as having the type definition right next to where the variable is first needed, and that you immediately get to the code instead of having to first parse a variable list. A disadvantage is that up-front declarations can help introduce the structure of the program and which variables are important to keep track of. I feel like this would matter mostly in larger functions.

A related question is whether it's good practice to introduce scoped variables inside loops if they're recalculated or reset every step anyway. Right now I'm playing around with some vectors, letting the destructor take care of them once they exit the scope, then creating a new one.

Code:
for (int i = 0; i < L; i++) {
    double isq = i*i; // bad practice?
    vector<double> vec1 { f(i), g(i), ... }; // very convenient, but maybe bad practice? new alloc each step ...
    
    // probably pushing it
    vector<double> vec2;
    for (int j = -n; j <= n; j++)
    {
        vec2.push_back(h(j));
    }

    ...
}

I'm still just an amateur, but I'd prefer not to pick up bad practices if I can avoid it.
 

Lonely1

Unconfirmed Member
Any Haskell experts around here?

I have a doubt about how Haskell works. I have the following function:

Code:
invTranSamp ps x = geqS x ls
    where ls = cumDis ps

As its name suggests, it's an inverse transform sampling implementation (on discrete distributions); 'cumDis' is the cumulative distribution of the distribution defined by 'ps', and 'geqS' finds (by means of a binary search) the corresponding infimum value in cumDis.

My question is: calling cumDis is an O(n log n) operation. Are the results of calling 'cumDis ps' stored in memory, so that calling 'invTranSamp' is O(n log n) once and O(log n) afterwards, or is it O(n log n) every time?

The second would be pretty bad. I will be calling it millions of times and 'n' is in the hundreds of thousands :S .
 

Leezard

Member
Is there a consensus on where to do variable declarations in C/C++? [...] I'm still just an amateur, but I'd prefer not to pick up bad practices if I can avoid it.
I'm not familiar enough with C++ to say this for certain in this case, but generally reallocating/changing the size of vectors is really bad for performance, it's horrible. This might not matter for small applications, but it's a really bad habit.
 

peakish

Member
I'm not familiar enough with C++ to say this for certain in this case, but generally reallocating/changing the size of vectors is really bad for performance, it's horrible. This might not matter for small applications, but it's a really bad habit.
Hmm, yeah even if I was thinking mostly of small vectors it's probably best to never begin taking a shortcut like that.
 
Hmm, yeah even if I was thinking mostly of small vectors it's probably best to never begin taking a shortcut like that.

If you know the final size up front, then you can just perform a single allocation using the 'reserve' function before you start push'ing. This can also save you a bit of memory, since the vector will over-allocate in order to amortize the cost of push'ing:
http://en.cppreference.com/w/cpp/container/vector/reserve
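For example, a minimal sketch:

Code:
#include <vector>

std::vector<double> make_samples(int n)
{
    std::vector<double> v;
    v.reserve(n);              // one allocation up front
    for (int i = 0; i < n; i++)
        v.push_back(i * 0.5);  // no reallocations during the pushes
    return v;
}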

Also, to answer your previous question: For types that are cheap to construct, I find it preferable to place the declaration in the smallest possible scope where that variable is required. Doing so naturally limits the scope in which you have to think about that variable, which (IMO) makes code easier to read*.

For types that do heap allocations, such as vectors, or are otherwise expensive to construct, it can be preferable to move these outside of loops so that the resources can be reused each loop. But generally this is only something you need to worry about in performance critical parts of your code (profile before optimizing).


* I also tend to sprinkle 'const' around liberally for similar reasons, since it makes the variables that are going to be reassigned visible at a glance and makes reasoning about the code easier (again IMO).
 
Hmm, yeah even if I was thinking mostly of small vectors it's probably best to never begin taking a shortcut like that.

Small allocations are still pretty expensive. It looks like the vector vec1 has a statically known size (known at compile time). In that case, you can use a std::array (which is a stack-allocated array with size known at compile time), which has basically zero overhead.
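A quick sketch of that swap (f and g here stand in for the functions from the example above):

Code:
#include <array>

double f(int i) { return i * 1.5; }  // placeholder for the real f
double g(int i) { return i - 0.5; }  // placeholder for the real g

void step(int i)
{
    std::array<double, 2> vec1 { f(i), g(i) };  // stack-allocated, no heap
    // ... use vec1 ...
}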

^^ that is also very good advice.
 

peakish

Member
Ucchedavāda said:
Also, to answer your previous question: For types that are cheap to construct, I find it preferable to place the declaration in the smallest possible scope where that variable is required. Doing so naturally limits the scope in which you have to think about that variable, which (IMO) makes code easier to read*.

For types that do heap allocations, such as vectors, or are otherwise expensive to construct, it can be preferable to move these outside of loops so that the resources can be reused each loop. But generally this is only something you need to worry about in performance critical parts of your code (profile before optimizing).
This all sounds good to me, although I'll stop experimenting with scoping larger objects like this. Speaking of `const`, one detail I like in Rust is that variables are const by default and have to be explicitly made mutable.

Small allocations are still pretty expensive. It looks like the vector vec1 has a statically known size (known at compile time). In that case, you can use a std::array (which is a stack-allocated array with size known at compile time), which has basically zero overhead.

^^ that is also very good advice.
Point taken about the cost. While biking home I imagined some future where I'd made a bad assumption about the cost of a "small" operation and found out it was a bottleneck only after a lot of head scratching.

I do know of std::array but the above example is (mostly) a toy one. Though I got it from code which does determine the size of the data at runtime.

Thanks!
 
Any Haskell experts around here? I have a doubt about how Haskell works. [...] The second would be pretty bad. I will be calling it millions of times and 'n' is in the hundreds of thousands :S .
Your fear is correct. Haskell doesn't memoize, it only thunks.

A where clause actually translates to a let form. You're creating a new data structure every time. If you want to cache the results, you'll need to refer to the same data structure every time. I'll let you figure that one out :) If you need help, try googling "memoizing in Haskell". If you still need help, ask me in a PM!
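As a nudge in the right direction, one common trick (a sketch; cumDis and geqS are your functions) is to bind the distribution first, so ls is computed once and shared by every call through the closure:

Code:
-- bind ps first; ls lives in the closure and is forced only once
invTranSamp ps = \x -> geqS x ls
  where ls = cumDis ps

-- usage: partially apply, then reuse
--   sample = invTranSamp ps
--   ys     = map sample xs   -- each call is just the O(log n) search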
 