
Programming |OT| C is better than C++! No, C++ is better than C

Somnid

Member
You guys are killing me.

Very dumb question, but a quick search didn't give me a satisfactory answer:
Is there a good reason why server- and client-side scripting is done in different languages?

I guess stuff like node.js (from what I understand) is blurring it, but why not have one language for both purposes from the start?

It started with no scripting, just HTML, which was just text documents with hyperlinks. Over time people started to make more elaborate documents in terms of presentation, and this led to people wanting to do more with them and adding new features. One of the things they decided to add was some light scripting features. Back in those days things were not very standard; JavaScript was famously created in 10 days by Brendan Eich at Netscape. Eventually they brought it to ECMA to create a standard so pages could be more interoperable between browsers. MS was not really on board with this and was uninterested in client-side scripting, choosing to push the server-side method of updating content via forms which had existed long before (ASP, PHP, CGI to interface with other popular languages of the time, etc). Eventually they caved, which caused a bit of a schism with separate JavaScript standards going at the same time, and this eventually settled with ECMAScript 5.

So to answer more succinctly, it was more a question of whether to have scripting at all. JavaScript was created to add scripting to HTML documents without the use of heavy, lower-level languages like Java and C. Note that for a long time Java applets were the way to get interactivity into pages, and MS pushed VBScript as a scripting language as well, but it didn't win out.
 
You guys are killing me.

Very dumb question, but a quick search didn't give me a satisfactory answer:
Is there a good reason why server- and client-side scripting is done in different languages?

I guess stuff like node.js (from what I understand) is blurring it, but why not have one language for both purposes from the start?

Client side is less flexible when it comes to using different languages: a language would have to be supported by the browser, or the user would need to install a plug-in, and the language maintainers would have to make a plug-in for every single commonly used browser. So because of this JavaScript is just more convenient. Server side doesn't have this kind of problem, because it is up to the creators of the website to install the necessary files to support a language. Until webassembly is ready for mainstream use, we are stuck with javascript.
 
So I'm going to start applying for CS internships. I only have about 4 completed CS courses under my belt (consider me a freshman, I suppose... or perhaps someone just starting their sophomore term), and am just about to complete 2 more.

Intro CS I, Intro CS II, Discrete Structures in CS, Usability Engineering (joke course), Computer Architecture, Data Structures.

I have a few class projects that maybe sort of I can put on a resume.

Does anyone know what a freshman/sophomore resume should look like for CS internships? And then would anyone be willing to look over mine sometime next week?
 
Until webassembly is ready for mainstream use, we are stuck with javascript.

This is wrong. I will illustrate why it is wrong in the typical way.

Why is webassembly going to free us from JavaScript on the client-side?

Because webassembly is a fast, near-native speed executable encoding that is safe for the web.

But what does that have to do with JavaScript, and how does it free us?

Well, people will be able to ship WebAssembly binaries and their browser will be compatible.

That sounds useful. How do I make a WebAssembly binary?

Simple. You write in the programming language of your choice, run it through a compiler, and a web-asm binary pops out.

So the bridge is the compiler, yes? By compiling to a uniform format, many languages can be treated by the same endpoint.

Yes, that is the idea.

Then why not compile to JavaScript?

Because it is slower.

But won't JavaScript be faster because it can be compiled to Web Assembly?

This is true.

And won't JavaScript be easier to compile to because it is a high level language?

This is also true. So then why don't people already compile to JavaScript?

In fact, they do. There are many languages that do this already:
- Elixir
- Clojurescript
- Fable
- Purescript
- Typescript
- GHC2JS

Etc. So what is the point of WebAssembly, then?

I suppose it is to make JavaScript faster, and has nothing to do with inter-language compatibility.
 

Somnid

Member
This is wrong. I will illustrate why it is wrong in the typical way.

Why is webassembly going to free us from JavaScript on the client-side?

Because webassembly is a fast, near-native speed executable encoding that is safe for the web.

But what does that have to do with JavaScript, and how does it free us?

Well, people will be able to ship WebAssembly binaries and their browser will be compatible.

That sounds useful. How do I make a WebAssembly binary?

Simple. You write in the programming language of your choice, run it through a compiler, and a web-asm binary pops out.

So the bridge is the compiler, yes? By compiling to a uniform format, many languages can be treated by the same endpoint.

Yes, that is the idea.

Then why not compile to JavaScript?

Because it is slower.

But won't JavaScript be faster because it can be compiled to Web Assembly?

This is true.

And won't JavaScript be easier to compile to because it is a high level language?

This is also true. So then why don't people already compile to JavaScript?

In fact, they do. There are many languages that do this already:
- Elixir
- Clojurescript
- Fable
- Purescript
- Typescript
- GHC2JS

Etc. So what is the point of WebAssembly, then?

I suppose it is to make JavaScript faster, and has nothing to do with inter-language compatibility.

This is incorrect. WebAssembly (WASM) is mainly trying to improve the speed of parsing and transmission by providing a binary format that's quick to parse and a little less esoteric than asm.js. Under the hood it works just like asm.js, which itself is just a series of optimizations built into existing JS engines based on typing guarantees, and which exists in all non-WebKit browsers today (and even they are implementing it). WASM will do nothing for JS that doesn't already exist, and the real use case for asm.js is to allow companies to port C++ code to the web with minimal rewriting (other languages are able to leverage it, but that's not the primary case). JS has a different track: the JS engines do have type optimizations, but they cannot be applied directly to JS code due to things like sparse arrays and other runtime type changes. These will be addressed via Strong Mode (also referred to as Sound Mode), which is an opt-in mode like Strict Mode but removes some of the looser modification rules to get better compiler optimization. Beyond that they will move to Typed JavaScript, which is currently planned to be built on top of TypeScript.
 

Kalnos

Banned
Does anyone know what a freshman/sophomore resume should look like for CS internships? And then would anyone be willing to look over mine sometime next week?

Definitely throw projects on there even if they're just coursework. Describe what you did, why, and how. Try to host the code on GitHub or somewhere else if you can. Companies understand that you're a freshman/sophomore and what to expect. r/cscareerquestions is a decent resource for this sort of thing, especially if you want a lot of anonymous people's opinions.
 

Holundrian

Unconfirmed Member
Dunno if this is the right thread to ask for but anyone have a recommendation on a good book that could give me enough fundamental understanding of graphics programming so I can build on that and dive in deeper where I need to?

Like I've got a good book that covers the math on that front, but I feel like I want something that covers more the programming front on the topic (like important concepts), maybe even going into the common APIs and what I should know about them.

Big question probably but since I'm a noob on the topic right now I don't know how to ask this better, I'm not even sure if what I'm seeking is smart and just sticking to the math side is enough.
 

Somnid

Member
Github down, productivity gone.

I always wondered why Git never had a feature to pull updates from local repositories on the network first before trying to pull from the remote. In any case, if you have a coworker with an updated copy you can add them as a remote.
 

poweld

Member
I always wondered why Git never had a feature to pull updates from local repositories on the network first before trying to pull from the remote. In any case, if you have a coworker with an updated copy you can add them as a remote.

You can do that. You'd just have to have your coworker run the git daemon on their machine and add them as a remote on your local repository.
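Something like this should do it (hostnames, paths, and the branch name here are just placeholders):

Code:
# On your coworker's machine: serve everything under ~/repos, read-only, over git://
git daemon --base-path=$HOME/repos --export-all

# On your machine: add them as a remote and pull their changes
git remote add coworker git://coworker-host/myproject
git fetch coworker
git merge coworker/master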
 

Two Words

Member
Bout to start my summer semester of Organization of Programming Languages. It's gonna be my first time doing anything in a functional programming language. Any advice?
 

poweld

Member
Bout to start my summer semester of Organization of Programming Languages. It's gonna be my first time doing anything in a functional programming language. Any advice?

Keep an open mind. It's going to be pretty different from what you're used to. If you keep working with it, it will eventually click.

And then your pupils dilate and you don't stop talking about functional programming until your friends can't stand you.
 

Somnid

Member
You can do that. You'd just have to have your coworker run the git daemon on their machine and add them as a remote on your local repository.

Right, that's what I said, but I was thinking more of an ability for it to pull from several sources in a less centralized fashion, kinda like Bittorrent.
 
Right, that's what I said, but I was thinking more of an ability for it to pull from several sources in a less centralized fashion, kinda like Bittorrent.

Well, there are things like gittorrent (https://github.com/cjb/GitTorrent) that can do that.

You can do that. You'd just have to have your coworker run the git daemon on their machine and add them as a remote on your local repository.

Or just clone the repos directly over some other protocol if you are on the same network with your co-worker.
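For example (hostname and path are placeholders), assuming you have SSH access to their machine:

Code:
git clone ssh://coworker-host/home/alice/projects/myproject.git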
 

poweld

Member
Right, that's what I said, but I was thinking more of an ability for it to pull from several sources in a less centralized fashion, kinda like Bittorrent.

Like having multiple remotes? Or are you talking about having some kind of service that will match you up with remotes automagically?
 

Two Words

Member
Keep an open mind. It's going to be pretty different from what you're used to. If you keep working with it, it will eventually click.

And then your pupils dilate and you don't stop talking about functional programming until your friends can't stand you.

lol I kinda know somebody who is like that.
 

Mr.Mike

Member
Not really the sort of caching or redundancy stuff you guys seem to want, but in the vein of P2P version control, I'm wondering if you could set up a connection between coworkers to share information about changes being made as they are being made. The hope then would be that merge conflicts could be detected before you try to merge, so you'd resolve them upfront instead of later. It would be n(n-1)/2 connections for n people, so you'd have to break it down into small groups, else you'd run into limitations (technical, but also human limitations way before that).
 

Somnid

Member
Not really the sort of caching or redundancy stuff you guys seem to want, but in the vein of P2P version control, I'm wondering if you could set up a connection between coworkers to share information about changes being made as they are being made. The hope then would be that merge conflicts could be detected before you try to merge, so you'd resolve them upfront instead of later. It would be n(n-1)/2 connections for n people, so you'd have to break it down into small groups, else you'd run into limitations (technical, but also human limitations way before that).

Not that I'm aware but I'm also curious. One of my ideas was a code editor that works like Google Docs. You would see other people adding code in real time and it would essentially always be merging in real time or close to it with the idea that the smallest possible merge is always best. Of course as a pure concept this wouldn't actually work in real cases because you'd get weird half-merges and incomplete features that would break things but perhaps you could play around with when merges are considered "safe" and non-breaking or allow users to manually merge blocks as they come in and show them in real-time but ignore them in terms of compilation. At the very least you'd know when two people are in the same area.
 

Jokab

Member
Not that I'm aware but I'm also curious. One of my ideas was a code editor that works like Google Docs. You would see other people adding code in real time and it would essentially always be merging in real time or close to it with the idea that the smallest possible merge is always best. Of course as a pure concept this wouldn't actually work in real cases because you'd get weird half-merges and incomplete features that would break things but perhaps you could play around with when merges are considered "safe" and non-breaking or allow users to manually merge blocks as they come in and show them in real-time but ignore them in terms of compilation. At the very least you'd know when two people are in the same area.

I think the Saros plugin for Eclipse does real-time collaborative code editing. We had to do it for a school lab, worked fine. Only two people though.
 

Mr.Mike

Member
Not that I'm aware but I'm also curious. One of my ideas was a code editor that works like Google Docs. You would see other people adding code in real time and it would essentially always be merging in real time or close to it with the idea that the smallest possible merge is always best. Of course as a pure concept this wouldn't actually work in real cases because you'd get weird half-merges and incomplete features that would break things but perhaps you could play around with when merges are considered "safe" and non-breaking or allow users to manually merge blocks as they come in and show them in real-time but ignore them in terms of compilation. At the very least you'd know when two people are in the same area.

You could perhaps only show what other people are doing in real time, and every time some changes have been made and tests run successfully the changes are automatically pushed to everyone else. And you'd encourage people to do things in small steps, which is probably good practice anyway.

And/or maybe you could take a page out of multi threading and let people "lock" files (or maybe some smaller unit of code, functions/classes) so only they can edit them.
 

MRORANGE

Member
can someone explain this in python?

Code:
for i in range (1,11):
    print("I can count to ",i)

Why is it only up to 10 and not 11?


I can count to 1
I can count to 2
I can count to 3
I can count to 4
I can count to 5
I can count to 6
I can count to 7
I can count to 8
I can count to 9
I can count to 10
 

Somnid

Member
You could perhaps only show what other people are doing in real time, and every time some changes have been made and tests run successfully the changes are automatically pushed to everyone else. And you'd encourage people to do things in small steps, which is probably good practice anyway.

And/or maybe you could take a page out of multi threading and let people "lock" files (or maybe some smaller unit of code, functions/classes) so only they can edit them.

This was sorta what I was thinking, though it's hard to even imagine the workflow without a prototype or a similar piece of software; the idea itself might not be workable, but it sounded like it had promise.

I'd also like to see semantically aware version control that can merge changes based on the actual code structure rather than code lines.
 
can someone explain this in python?

Code:
for i in range (1,11):
    print("I can count to ",i)

Why is it only up to 10 and not 11?
Because Python ranges are half-open: range(1, 11) includes the start but excludes the end. So you subtract the end minus the beginning and easily get the number of elements: 11 - 1 = 10. Not only that, but if you start indexing at 0, it gets even easier.

Code:
for i in range(0, 10):
    print(i)

And now take the two points into consideration while iterating over an array.

Code:
for i in range(0, len(array)):
    print(array[i])

No need to do a -1.
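To make the convenience concrete, here's a small sketch (the variable names are just for illustration):

Code:
# The length of a half-open range is simply end - start.
a, b, c = 1, 6, 11
assert len(range(a, b)) == b - a   # 5 elements: 1, 2, 3, 4, 5
assert len(range(a, c)) == c - a   # 10 elements: 1 through 10

# Splitting at any midpoint gives two pieces with no overlap and no gap.
assert list(range(a, b)) + list(range(b, c)) == list(range(a, c))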
 
While all of the above about python ranges are true, there's another more fundamental reason why (1,11) does not include the number 11. Suppose it did. How would you write the loop? Here's a couple of ways:

Code:
i = 0
while i <= 11:
    print("I can count to ", i)
    i = i + 1

i = 0
while i < 12:
    print("I can count to ", i)
    i = i + 1

i = 0
while i != 12:
    print("I can count to ", i)
    i = i + 1

Any thoughts on which one is the "best"? On the one hand, the first one is the most readable. You don't have to do some mental subtraction by 1, and the bounds of the range are just "there". But there's a HUGE drawback. It requires a < operator. So great, you can use it for numbers, but what about arbitrary objects? Not every set of data can be ordered.

For that reason, the 3rd approach is the best. It is the most generic because it only requires you to be able to check for equality. Now, sure, you could do something like this:

Code:
i = 0
last = False
while not last:
    if i == 11:
        last = True
    print("I can count to ", i)
    i = i + 1

But wow, that's some shitty code, right? So the "best" and most efficient way to write this loop is to loop until you are one past the last element.


In fact, this is the way C++ does it too. For example, if you want to iterate over an STL vector, you would write this:

Code:
std::vector<int> items;
for (auto b = items.begin(); b != items.end(); ++b)
    std::cout << "I can count to " << *b << std::endl;

Here, if you tried to dereference items.end(), you'd get undefined behavior, because it doesn't point to one of the items of the sequence. Using proper mathematical notation, the valid range of the sequence is the half-open interval [items.begin(), items.end())

Iterating over half open intervals as opposed to closed intervals is so convenient it's almost universally adopted in modern languages.
 

Koren

Member
While all of the above about python ranges are true, there's another more fundamental reason why (1,11) does not include the number 11. Suppose it did. How would you write the loop? Here's a couple of ways:

Code:
i = 0
while i <= 11:
    print("I can count to ", i)
    i = i + 1

i = 0
while i < 12:
    print("I can count to ", i)
    i = i + 1

i = 0
while i != 12:
    print("I can count to ", i)
    i = i + 1

Any thoughts on which one is the "best"? On the one hand, the first one is the most readable. You don't have to do some mental subtraction by 1, and the bounds of the range are just "there". But there's a HUGE drawback. It requires a < operator. So great, you can use it for numbers, but what about arbitrary objects? Not every set of data can be ordered.

For that reason, the 3rd approach is the best. It is the most generic because it only requires you to be able to check for equality. Now, sure, you could do something like this:
[...]

That's interesting, but I don't completely buy it...

* You can write the loop without the < and with 11 :
Code:
i = 0
repeat 
    print("I can count to ", i)
    i = i + 1
until i = 11
or
Code:
i = 0
while true :
    print("I can count to ", i)
    i = i + 1
    if i == 11 : break

Granted, the first one is neither Python nor C, and the second is less common/nice, but when you're writing the compiler/interpreter, I don't think you'll make decisions based on such small details.

* The range USES the < operator, since range(11, 1) is empty and not something that would give an infinite loop (so your first two examples are the only ones that match Python's behavior there, even if I'm sure it's the third that is used in the loop, with first < last only checked once at the start).

* indeed, < is not available for all types, but loops in Python stop when StopIteration is raised... range is an integer-only construct, so they could have designed it to raise StopIteration after yielding 11 (see the sketch below).
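For illustration, a hypothetical inclusive range could look something like this (the name inclusive_range is made up; in modern Python a generator signals StopIteration to its caller by returning):

Code:
def inclusive_range(start, stop):
    """Hypothetical closed-interval range: yields start..stop, both ends included."""
    i = start
    while True:
        yield i
        if i == stop:
            return  # the for loop driving this generator sees StopIteration here
        i += 1

print(list(inclusive_range(1, 11)))  # [1, 2, ..., 10, 11]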

I think that
Iterating over half open intervals as opposed to closed intervals is so convenient it's almost universally adopted in modern languages.
is the main reason. It's usually done that way, and done that way because range(10) gives you 10 iterations since you begin at 0.


Having to deal with students that learn, at the same time:
- Python, where lists (tables in fact) begin at 0 and where range(1,5) or 1:5 slice is 1, 2, 3, 4
- Scilab, where vectors begin at 1 and where 1:5 is 1, 2, 3, 4, 5
- Caml, where vectors begin at 0 and where for i=1 to 5 is 1, 2, 3, 4, 5
I would have liked them to settle on a standard more often. There have even been mistakes in loops in national exams.
 

Koren

Member
In any case, the main reason is this note from 1982.
Nice read with which I agree (but that's basically "range(10) should have 10 elements, and counts should begin at 0"), and great conclusion ^_^

Although I'm not sure that tables that begin at 0 is really a philosophical choice in C... in such a low-level language, table indexes are just pointer arithmetic (a[b] just translates into *(a+b)), so they would probably never have settled on tables that start indexing at 1... either they would have wasted memory (not a chance), or had tables whose address isn't the start of the allocated memory...
 

Koren

Member
In PHP range(1, 11) includes 11 and that's what I'd expect to happen.
It's a matter of choice/habit, and I think having 1..10 is more natural. I've already made mistakes in PHP because of it.


Note that in Python, it's not always logical:
random.randint(1, 11) gives a random integer in 1, 2, ..., 10, 11

AFAIK, some people complained, so they added
random.randrange(1, 11) that gives a random integer in 1, 2, ..., 9, 10

But things became nasty when the numpy people started doing things their way (and it's worsening) and decided that
numpy.random.randint(1, 11) gives a random integer in 1, 2, ..., 9, 10

Now, you have to fish for the import to know how randint behaves... :/
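A quick way to check the difference (this just exercises the behavior described above; the first assert is only "virtually certain" since the draws are random):

Code:
import random
import numpy as np

# random.randint(a, b) includes both endpoints, so 11 can come out.
assert any(random.randint(1, 11) == 11 for _ in range(100_000))

# random.randrange(a, b) is half-open like range(), so 11 never comes out.
assert all(random.randrange(1, 11) != 11 for _ in range(100_000))

# numpy.random.randint(low, high) is half-open too: values are in [1, 11).
assert np.random.randint(1, 11, size=100_000).max() <= 10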
 
Nice read with which I agree (but that's basically "range(10) should have 10 elements, and counts should begin at 0"), and great conclusion ^_^

Although I'm not sure that tables that begin at 0 is really a philosophical choice in C... in such a low-level language, table indexes are just pointer arithmetic (a[b] just translates into *(a+b)), so they would probably never have settled on tables that start indexing at 1... either they would have wasted memory (not a chance), or had tables whose address isn't the start of the allocated memory...
Making a choice to have array access match the semantics of pointer arithmetic was definitely a philosophical choice, and probably the most important choice it ever made, i.e. that the programmer is programming against a machine with weakly coercible abstractions. And it was not the first time this choice had to be made.

You see, ALGOL, Fortran, and BCPL (from which B derived, from which C derived) had for loops which were completely inclusive. From Wikipedia, a matrix algorithm in ALGOL 60:

Code:
procedure Absmax(a) Size:(n, m) Result:(y) Subscripts:(i, k);
    value n, m; array a; integer n, m, i, k; real y;
comment The absolute greatest element of the matrix a, of size n by m,
    is transferred to y, and the subscripts of this element to i and k;
begin
    integer p, q;
    y := 0; i := k := 1;
    for p := 1 step 1 until n do
        for q := 1 step 1 until m do
            if abs(a[p, q]) > y then
                begin y := abs(a[p, q]);
                    i := p; k := q
                end
end Absmax
So in fact the most obvious choice would have been to do what Backus and co were doing. What literally everyone else was doing. So somewhere between Algol 60 and the invention of C, someone had to make a choice of closed-open versus closed. And also, mysteriously we haven't seen where starting indices from 0 started. Was it really because of pointer arithmetic? That's probably why it stuck, but it turns out that it might have started because of something else...

The usual arguments involving pointer arithmetic and incrementing by sizeof(struct) and so forth describe features that are nice enough once you’ve got the hang of them, but they’re also post-facto justifications. This is obvious if you take the most cursory look at the history of programming languages; C inherited its array semantics from B, which inherited them in turn from BCPL, and though BCPL arrays are zero-origin, the language doesn’t support pointer arithmetic, much less data structures. On top of that other languages that antedate BCPL and C aren’t zero-indexed. Algol 60 uses one-indexed arrays, and arrays in Fortran are arbitrarily indexed – they’re just a range from X to Y, and X and Y don’t even need to be positive integers.
So if your answer started with “because in C…”, you’ve been repeating a good story you heard one time, without ever asking yourself if it’s true. It’s not about *i = a + n*sizeof(x) because pointers and structs didn’t exist. And that’s the most coherent argument I can find; there are dozens of other arguments for zero-indexing involving “natural numbers” or “elegance” or some other unresearched hippie voodoo nonsense that are either wrong or too dumb to rise to the level of wrong.
The fact of it is this: before pointers, structs, C and Unix existed, at a time when other languages with a lot of resources and (by the standard of the day) user populations behind them were one- or arbitrarily-indexed, somebody decided that the right thing was for arrays to start at zero.
So I found that person and asked him.
His name is Dr. Martin Richards; he’s the creator of BCPL, now almost 7 years into retirement; you’ve probably heard of one of his doctoral students Eben Upton, creator of the Raspberry Pi. I emailed him to ask why he decided to start counting arrays from zero, way back then. He replied that…
As for BCPL and C subscripts starting at zero. BCPL was essentially designed as typeless language close to machine code. [...] Just as machine code allows address arithmetic so does BCPL, so if p is a pointer p+1 is a pointer to the next word after the one p points to. Naturally p+0 has the same value as p. I can see no sensible reason why the first element of a BCPL array should have subscript one.
“Now just a second, Hoye”, I can hear you muttering. “I’ve looked at the BCPL manual and read Dr. Richards’ explanation and you’re not fooling anyone. That looks a lot like the efficient-pointer-arithmetic argument you were frothing about, except with exclamation points.” And you’d be very close to right. That’s exactly what it is – the distinction is where those efficiencies take place, and why.
BCPL was first compiled on an IBM 7094 – here’s a picture of the console, though the entire computer took up a large room – running CTSS – the Compatible Time Sharing System – that antedates Unix much as BCPL antedates C. There’s no malloc() in that context, because there’s nobody to share the memory core with. You get the entire machine and the clock starts ticking, and when your wall-clock time block runs out that’s it. But here’s the thing: in that context none of the offset-calculations we’re supposedly economizing are calculated at execution time. All that work is done ahead of time by the compiler.
You read that right. That sheet-metal, “wibble-wibble-wibble” noise your brain is making is exactly the right reaction.
Whatever justifications or advantages came along later – and it’s true, you do save a few processor cycles here and there and that’s nice – the reason we started using zero-indexed arrays was because it shaved a couple of processor cycles off of a program’s compilation time. Not execution time; compile time.
Does it get better? Oh, it gets better:
IBM had been very generous to MIT in the fifties and sixties, donating or discounting its biggest scientific computers. When a new top of the line 36-bit scientific machine came out, MIT expected to get one. In the early sixties, the deal was that MIT got one 8-hour shift, all the other New England colleges and universities got a shift, and the third shift was available to IBM for its own use. One use IBM made of its share was yacht handicapping: the President of IBM raced big yachts on Long Island Sound, and these boats were assigned handicap points by a complicated formula. There was a special job deck kept at the MIT Computation Center, and if a request came in to run it, operators were to stop whatever was running on the machine and do the yacht handicapping job immediately.
Jobs on the IBM 7090, one generation behind the 7094, were batch-processed, not timeshared; you queued up your job along with a wall-clock estimate of how long it would take, and if it didn’t finish it was pulled off the machine, the next job in the queue went in and you got to try again whenever your next block of allocated time happened to be. As in any economy, there is a social context as well as a technical context, and it isn’t just about managing cost, it’s also about managing risk. A programmer isn’t just racing the clock, they’re also racing the possibility that somebody will come along and bump their job and everyone else’s out of the queue.
So: the technical reason we started counting arrays at zero is that in the mid-1960’s, you could shave a few cycles off of a program’s compilation time on an IBM 7094. The social reason is that we had to save every cycle we could, because if the job didn’t finish fast it might not finish at all and you never know when you’re getting bumped off the hardware because the President of IBM just called and fuck your thesis, it’s yacht-racing time.
There are a few points I want to make here.
The first thing is that as far as I can tell nobody has ever actually looked this up.
Whatever programmers think about themselves and these towering logic-engines we've erected, we're a lot more superstitious than we realize. We tell and retell this collection of unsourced, inaccurate stories about the nature of the world without ever doing the research ourselves, and there's no other word for that but "mythology". Worse, by obscuring the technical and social conditions that led humans to make these technical and social decisions, by talking about the nature of computing as we find it today as though it's an inevitable consequence of an immutable set of physical laws, we're effectively denying any responsibility for how we got here. And worse than that, by refusing to dig into our history and understand the social and technical motivations for those choices, by steadfastly refusing to investigate the difference between a motive and a justification, we're disavowing any agency we might have over the shape of the future. We just keep mouthing platitudes and pretending the way things are is nobody's fault, and the more history you learn and the more you look at the sad state of modern computing the more pathetic and irresponsible that sounds.
 

Koren

Member
Making a choice to have array access match the semantics of pointer arithmetic was definitely a philosophical choice
In a sense, since it's compiled, yes... but that depends on what you call philosophical. I'll try to explain later.

Was it really because of pointer arithmetic? That's probably why it stuck, but it turns out that it might have started because of something else...
Interesting read, and I still have to read it all, but I wasn't talking about C pointer arithmetic.

I was talking about the fact that, in processors, you could very early on access memory using two registers added together (possibly with a x2 or x4 multiplication involved, like in x86). Thus it made sense to put the address of the "table" in one and the index in the second one.

If you put the beginning of the data area in the first register, the second must hold 0 and not 1 so that the designated memory address is the first data of the chunk. So I still think in hardware, arrays are more logically 0-indexed, and that's even before you even create programming languages.
 
I was talking about the fact that, in processors, you could very early on access memory using two registers added together (possibly with a x2 or x4 multiplication involved, like in x86). Thus it made sense to put the address of the "table" in one and the index in the second one.
That's probably true, but what the index actually is and what the instructions are never has to be the same. So at compile time, you could simply have whatever you want and the compiler would auto-decrement the constant in your array accesses anyway. Just as the compiler already does a million things for you.

If you put the beginning of the data area in the first register, the second must hold 0 and not 1 so that the designated memory address is the first data of the chunk. So I still think in hardware, arrays are more logically 0-indexed, and that's even before you even create programming languages.
Like I said, the third option is for the array indices and the emitted instructions to simply be different.

Also, C is not a low level language. It was meant to be a high level one! And it was so good at what it did that it became the new low level. ALGOL, Fortran, et al's goals were not to be fast but to be sane. Besides Lisp, I cannot tell you of another language around the time of C that was actually any slower than C. (And C was relatively slow for a language for some time).

So what this all means is that if C is actually a high level language, you need to make a decision about indexing. And what I posted might have been a good reason for BCPL, which was otherwise quite simple to compile.
 

Kieli

Member
Hey, suppose I declare and initialize a variable/pointer in a recursive function, will the value be declared anew each time we recurse?

E.g.

int someRecursiveFunction(int num) {
int fixedValue = 9001 + num;
int acc = 0;
if (num>0) {
acc++;
someRecursiveFunction(num--);
} else {
return acc;
}

Naturally, in this function, when I call someRecursiveFunction(9), I want fixedValue to remain 9010 for the duration of the function call. However, I want acc to update with each recursive call rather than resetting to 0.

Note that fixedValue and acc must be in local scope (so I can't just declare global variables).
 
Hey, suppose I declare and initialize a variable/pointer in a recursive function, will the value be declared anew each time we recurse?

E.g.

int someRecursiveFunction(int num) {
int fixedValue = 9001 + num;
int acc = 0;
if (num>0) {
acc++;
someRecursiveFunction(num--);
} else {
return acc;
}

Naturally, in this function, when I call someRecursiveFunction(9), I want fixedValue to remain 9010 for the duration of the function call. However, I want acc to update with each recursive call rather than resetting to 0.

Note that fixedValue and acc must be in local scope (so I can't just declare global variables).

Pass acc as an argument.

Code:
int someRecursiveFunction(int num, int acc = 0) {
    int fixedValue = 9001 + num;
    if (num > 0) {
        return someRecursiveFunction(num - 1, acc + 1);
    } else {
        return acc;
    }
}

I assume this is a hypothetical example, because fixedValue is not actually used for anything.
 

Kieli

Member
Pass acc as an argument.

Code:
int someRecursiveFunction(int num, int acc = 0) {
    int fixedValue = 9001 + num;
    if (num > 0) {
        return someRecursiveFunction(num - 1, acc + 1);
    } else {
        return acc;
    }
}

I assume this is a hypothetical example, because fixedValue is not actually used for anything.

Yeah, this is a hypothetical example. Also, I can't change the arguments for the function that I'm trying to implement. So passing an accumulator in the recursive function is a no-go. :(
 
Yeah, this is a hypothetical example. Also, I can't change the arguments for the function that I'm trying to implement. So passing an accumulator in the recursive function is a no-go. :(

Make a new function.

Code:
int recursiveHelper(int num, int acc) {
    int fixedValue = 9001 + num;
    if (num > 0) {
        return recursiveHelper(num - 1, acc + 1);
    } else {
        return acc;
    }
}

int someRecursiveFunction(int num) {
    return recursiveHelper(num, 0);
}
 

Two Words

Member
I'm taking linear algebra during the summer. I held it off a bit since it is a dead end in my degree plan. Some people have told me that linear algebra helps a lot with CS. How do you guys feel about that?
 
Yeah, this is a hypothetical example. Also, I can't change the arguments for the function that I'm trying to implement. So passing an accumulator in the recursive function is a no-go. :(
You want the helper function pattern that's common in functional languages. The idea is to not assume the initial value of the accumulator and define a recursive relationship between successive values. Then you initialize it with the original function.

Code:
int someRecursiveFunctionHelper(int num, int acc);  // forward declaration so the wrapper can call it

int someRecursiveFunction(int num) {
    return someRecursiveFunctionHelper(num, 0);
}

int someRecursiveFunctionHelper(int num, int acc) {
    int fixedValue = 9001 + num;
    if (num > 0) {
        return someRecursiveFunctionHelper(num - 1, acc + 1);
    } else {
        return acc;
    }
}

As I am a local ML proponent, here is the same code in f#.

Code:
let rec someRecursiveHelper n acc =
    if n > 0 then someRecursiveHelper (n-1) (acc+1)
    else acc

let someRecursiveFunction n =
    someRecursiveHelper n 0
I'm taking linear algebra during the summer. I held it off a bit since it is a dead end in my degree plan. Some people have told me that linear algebra helps a lot with CS. How do you guys feel about that?
It doesn't help at all unless you're programming something that requires linear algebra. But it turns out that modeling many things does require matrices, yes.

And it'll help you in the roundabout sense that abstract math like linear algebra better prepares your mind for CS problems.
 

upandaway

Member
I'm taking linear algebra during the summer. I held it off a bit since it is a dead end in my degree plan. Some people have told me that linear algebra helps a lot with CS. How do you guys feel about that?
I don't think it's relevant to everyday coding but for more theoretical areas (like algorithms or simulations) it's in the basis of everything, so it depends where you're going
 