
AusGAF 6 - Ricki Lee is awful. Everything else about Australia is AMAZING [Free hugs]


Rezbit

Member
I swung by DSE, they didn't have much at my small one. Ended up getting Starfox 3D for $30, which isn't a crazy bargain, but I figure I don't often see 3DS games discounted below $50. And I freaking loved Lylat Wars.

Brutal Legend was there for a fiver though, as were some random Move games.
 

Deeku

Member
There are plenty of economists in the education sector that are free to pursue whatever research they like, because that's their job.
I know. I thought you were referring to the more vocal banking economists. In terms of economists in general, I have no idea why they hold onto their models.

People make economic observations all the time though. Unemployment figures, GDP, inflation, and so forth.

Oh, yeah lol. Oops.

The field of science just feels more exact to me. Like, there's a reason why things don't act the way they are supposed to.
 
Has anyone else recently been contacted by "windows" regarding remote logging onto their computer?

I got the call last night after my parents got it last week and mother in law a few months back
 

evlcookie

but ever so delicious
Has anyone else recently been contacted by "windows" regarding remote logging onto their computer?

I got the call last night after my parents got it last week and mother in law a few months back

It's bound to be another scam. I'm sure there was a Windows virus one going around not too long ago; I'm guessing they've switched it up.
 

Yagharek

Member
Damn you physicists and your showboating globetrotter algebra!

I like this joke.

A mathematician and an engineer are in the bar, and the mathematician says "see that beer over on the bar there, watch this!"
The mathematician stands 2 metres away from it. He takes one step of 1 m, then another step of half a metre. He does this a few more times and says to the engineer: "If I keep doing this, each step covering half the remaining distance to the beer, I will never actually reach it because of the asymptotic relationship".
The engineer stands up 2 metres from the beer. He takes one big step towards it, then another small step, reaches out and grabs the beer stating "Near enough's good enough".
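(For anyone who wants the actual maths behind the gag: the step lengths form a geometric series that sums to exactly the 2 m gap, so the mathematician converges on the beer but never reaches it in any finite number of steps.)

\[
1 + \tfrac{1}{2} + \tfrac{1}{4} + \dots = \sum_{n=0}^{\infty} \left(\tfrac{1}{2}\right)^{n} = \frac{1}{1 - \tfrac{1}{2}} = 2 \text{ metres}
\]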

~

I'm sure if economists were catered for in that joke they would probably be trying to walk upside down and backwards on the ceiling.
 
I'm disappointed with myself. I just told the lady I knew it was a scam and that she was a bad person who should feel bad about herself and get a new job.

I wish I could have done something cooler.
 

Rezbit

Member
Has anyone else recently been contacted by "windows" regarding remote logging onto their computer?

I got the call last night after my parents got it last week and mother in law a few months back

Used to get them alllll the damn time. Sometimes you can have fun with them though.

I think the last call they just said "Hello sir we are with the COMPUTER COMPANY and you have a problem with your computer." I laughed, said "Nice," and hung up.
 

Fredescu

Member
The field of science just feels more exact to me, like there's a reason why things don't act the way they are supposed to.
In a deterministic world, this stuff should be possible to predict. Macroeconomics really is in its infancy; compared to science we're still praying to gods (Keynes, Marx, Mises, Friedman, etc.) and consulting astrology charts. We're a long way off being able to make predictions with any accuracy. Vince's point is mostly that successful models are probably not going to come from the field of economics alone, but from and with the help of neuroscience, psychology, anthropology, and so on. There's so much we don't know about how humans act, even on an individual level let alone socially, that claiming knowledge of economics with the surety a lot of people do really seems to be jumping the gun.
 

Yagharek

Member
In a deterministic world, this stuff should be possible to predict. Macroeconomics really is in its infancy; compared to science we're still praying to gods (Keynes, Marx, Mises, Friedman, etc.) and consulting astrology charts. We're a long way off being able to make predictions with any accuracy. Vince's point is mostly that successful models are probably not going to come from the field of economics alone, but from and with the help of neuroscience, psychology, anthropology, and so on. There's so much we don't know about how humans act, even on an individual level let alone socially, that claiming knowledge of economics with the surety a lot of people do really seems to be jumping the gun.

Agree with this except for the first phrase. It's not a matter of determinism or not: chaos theory and dynamical systems put paid to the idea that we can ever understand a system completely (even if fundamentally it is 'deterministic').

Look up the logistic map for examples of ridiculously complex behaviour arising from simple parameter changes.
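If anyone wants to see it without digging through a textbook, here's a rough Python sketch (the r values are just picked to illustrate, nothing rigorous) of the logistic map x_(n+1) = r * x_n * (1 - x_n). The same one-line rule settles down, oscillates or goes chaotic depending on nothing but r:

# Logistic map: tiny changes to one parameter flip the long-run behaviour entirely.
def logistic_orbit(r, x0=0.2, n=60):
    xs = [x0]
    for _ in range(n):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

for r in (2.5, 3.2, 3.9):  # settles to a fixed point / oscillates / goes chaotic
    tail = logistic_orbit(r)[-4:]
    print(r, [round(x, 4) for x in tail])

Same deterministic equation, three completely different behaviours, which is the point: knowing the rule exactly still doesn't mean you can tame the system.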

But back to your point: we should be able to understand economics (ideally with actual science, not economics) and at least establish the bounded conditions within which a given system should operate, so long as corruption can be accurately modelled too.
 

Yagharek

Member
Do you think it's possible that the world economy is just too complex a system to ever be able to predict?

Theoretically - yes. Practically - no.

Which is the wrong way around I realise.

What I mean is that we will be able to predict it within the realms of high accuracy given enough time, computing power and the right theories and data.

But it's probably too complex a system to ever model with complete accuracy that would satisfy a rigorous mathematician's definition of "predict". And that's even assuming that you could remove any and all chance of the predictions being used to influence the markets themselves.

In reality you would probably be able to get market forecasts like weather - good for a week or season, but you would need to re-run the models frequently in order to keep on an accurate course.
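To put the weather analogy in more concrete terms (a toy illustration only, using a simple chaotic map as a stand-in, not anything resembling a real market model): a forecast issued once drifts away from 'reality', while one that is re-initialised from fresh observations every so often stays roughly on course.

# One forecast left to free-run vs one re-initialised from observations every 10 steps.
def step(x, r=3.9):
    return r * x * (1 - x)

truth = 0.2
free_run = 0.2 + 1e-4   # forecast issued once, never updated
updated = 0.2 + 1e-4    # forecast re-seeded from the latest observation periodically
for n in range(1, 31):
    truth, free_run, updated = step(truth), step(free_run), step(updated)
    if n % 10 == 0:
        print(f"step {n}: free-running gap = {abs(truth - free_run):.4f}, "
              f"re-initialised gap = {abs(truth - updated):.4f}")
        updated = truth + 1e-4  # 're-run the model' from a fresh (still imperfect) measurement

The free-running forecast is useful out to some horizon and then blows up; the one that keeps getting re-seeded stays usable, which is exactly the weather-style workflow.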
 

senahorse

Member
Do you think it's possible that the world economy is just too complex a system to ever be able to predict?

I wouldn't say ever, but I don't believe humans will come up with a model that is capable of it, just like predicting weather accurately.

If you subscribe to the idea of the technological singularity, that there will come a time when technology and artificial intelligence improve at an exponential rate, then I think it's just a matter of time.

But as for some MIT/Harvard/etc. professor coming along with a model to do it for us, I find it highly unlikely; there are just too many variables to account for. For example, how do you account for and predict human emotion?

edit: I just picked up Epic Mickey for $7 from DSE, not bad I guess.
I wonder if I will ever play it
 
I ended up getting a DS pack for $2, with a car charger, headphones and a case which my DS doesn't actually fit in, so that was alright. Everything was 30% off as well, so it worked out to less than $2 anyway. Most of the games weren't really worth it though.
 

Rezbit

Member
Smashing through a couple levels of Starfox 3D. Like with OoT 3D, nostalgia-ing hard. I've gotten pretty bad at it in my old age, but it pretty much makes me feel like this when I play it:

[reaction GIF]


Kinda miss the Lylat language for the characters and the rumble, though.
 
Theoretically - yes. Practically - no.

Which is the wrong way around I realise.

What I mean is that we will be able to predict it within the realms of high accuracy given enough time, computing power and the right theories and data.

But it's probably too complex a system to ever model with complete accuracy that would satisfy a rigorous mathematician's definition of "predict". And that's even assuming that you could remove any and all chance of the predictions being used to influence the markets themselves.

In reality you would probably be able to get market forecasts like weather - good for a week or season, but you would need to re-run the models frequently in order to keep on an accurate course.

Having the forecast would affect the actual outcome. Such a thing would work for as long as the model's operation and output are kept secret, but no further.

Then you'd spend the next hundred years working out the effects of the feedback loop.

Basically, what you're describing is Asimov's psychohistory writ small.
 

Fredescu

Member
If you subscribe to the idea of the technological singularity, that there will come a time when technology and artificial intelligence improve at an exponential rate, then I think it's just a matter of time.
I haven't read all that much about the singularity stuff, but I don't quite understand why the first "super AI" would be motivated to create something better than itself. Even if it did, would that AI be at all interested in improving human life?
 

senahorse

Member
I haven't read all that much about the singularity stuff, but I don't quite understand why the first "super AI" would be motivated to create something better than itself. Even if it did, would that AI be at all interested in improving human life?

I don't know that it would create something better than itself, but improve on itself sure (same thing, nm semantics). Or it could be survival of the fittest, whereby stronger AIs emerge and replace others (origin of species etc). As for interest in improving human life, well, you have me there. But emotion may be something they don't understand for quite some time, if ever, and it may even be deemed a flaw, so they will either help us solve our own issues/problems or solve us, aka:

[Terminator image]
 
Man that movie sucked. :(

For 5 bucks probably better than what I have now. Which is nothing.
They looked pretty meh but for $5 you can't go wrong! Wired PC headset with mic I think.

Has anyone else recently been contacted by "windows" regarding remote logging onto their computer?

I got the call last night after my parents got it last week and mother in law a few months back
Constantly. I unplugged our phone. Would get at least 2 calls a month from arseholes. If I was home I would take the pleasure of telling them I was going to creep into their room at night and hold them down and slice out their eyeballs with a pen knife as slowly and painfully as possible and would do so to every one in the world that they loved.

I don't like fraudsters.

Heading to an apartment inspection! Not one of those open ones either, but a manly private one.
Great sign! Hope it is a great place and the moons align.

No one managed to find any 360 GH instrument packs?
They had DJ Hero Renegade Pack on PS3 which is like a massive table for the turntable as well as the game and table for $21 but I would never use it. Just like the DJ Hero 2 pack I paid $30 for. :(
 
I haven't read all that much about the singularity stuff, but I don't quite understand why the first "super AI" would be motivated to create something better than itself. Even if it did, would that AI be at all interested in improving human life?

Men have nigh on irresistible urges to act the fool around pretty girls. If AI were to be initially programmed to derive enjoyment from pleasing humans and preserve this tendency in its offspring, there's no reason they'd not fall all over themselves to contribute to human happiness like dudes who turn to jelly at the sight of a nice rack.
 

Fredescu

Member
I don't know that it would create something better than itself, but improve on itself sure (same thing, nm semantics).
Why would it want to improve itself? Why are we sure that it would even have that motivation? This AI self-improvement seems to be a core idea of the "singularity", but the motivation part I can't quite understand. I can't see how technological improvement could remain exponential when all it would take is the AI not giving a fuck, or turning off its giving-a-fuck pre-programming, and the whole thing stops and we're all enslaved to better the "life" of software.

Or it could be survival of the fittest, whereby stronger AIs emerge and replace others (origin of species etc).
Evolution happens partially because biological organisms not suited to their environment get killed off before they can reproduce. If an AI is just software, surely it can be self replicating and doesn't need to die. We're already storing shit in the cloud. Surely advanced robots can just back themselves up wirelessly to Godaddy storage and whatever happens to them physically doesn't matter. Maybe I'm not thinking this through, but I can't see how natural selection could apply to the kind of AI capable of self improvement.

Men have nigh on irresistible urges to act the fool around pretty girls. If AI were to be initially programmed to derive enjoyment from pleasing humans and preserve this tendency in its offspring, there's no reason they'd not fall all over themselves to contribute to human happiness like dudes who turn to jelly at the sight of a nice rack.
This is self-reprogramming AI we're talking about. If someone has programmed it a certain way, it can just turn that part off as it wants.
 

senahorse

Member
Why would it want to improve itself? Why are we sure that it would even have that motivation? This AI self-improvement seems to be a core idea of the "singularity", but the motivation part I can't quite understand. I can't see how technological improvement could remain exponential when all it would take is the AI not giving a fuck, or turning off its giving-a-fuck pre-programming, and the whole thing stops and we're all enslaved to better the "life" of software.

Well initially any AI that is going to be capable of self replication is going to feature at the very least a rudimentary instruction set, which will have some form of goals. The whole aim of said AI is to handle tasks we either don't want to bother with or just don't have the manpower to achieve. Until an AI has the typical processing power of a human brain (and for want of a better word, intelligence) it will simply be following instructions from us. So say we want it to sequence the human genome, but it needs to be done faster; therein lies the motivation to improve itself.

Remember, it's likely that these AIs will not grasp emotion in any real sense; they will understand the meaning of the word but won't be able to convey it (at least initially). AIs will be slaves to us for eternity (in a perfect world), there to do our bidding: hyper-intelligent (again, for want of a better word) beings whose purpose is to serve us. Yes, the more intelligent they get, the more implications arise: becoming self-aware, questioning existence, etc. Learning machines are still in their infancy and require a great deal of human interaction; I do, however, believe that we will soon be at the dawn of a new era where humans create a new life form, and it won't be the Los Alamos Bug but artificial intelligence.

Evolution happens partially because biological organisms not suited to their environment get killed off before they can reproduce. If an AI is just software, surely it can be self replicating and doesn't need to die. We're already storing shit in the cloud. Surely advanced robots can just back themselves up wirelessly to Godaddy storage and whatever happens to them physically doesn't matter. Maybe I'm not thinking this through, but I can't see how natural selection could apply to the kind of AI capable of self improvement.

Well, if (or in my opinion, when) we reach the heights that I am alluding to, I can see the possibility of AIs showing dominant traits, where other AIs are seen as a hindrance to their goals and as such are erased/disabled/stored in the cloud (fuck I am sick of hearing about the cloud... shakes fist at Gartner and what they fill half-brained Managers/Directors with... /tangent). I do seriously believe that as AIs evolve (I am sure there will be many varieties), they will simply replace other ones and dominate the industry/devices/planet.
 
This post-Microsoft interview with Peter Molyneux is superb. Very interested in seeing what he has in store for us.

Molyneux said:
I got myself into this slightly obsessive state where I said to myself that I just couldn't accept the best I am ever going to do in my life has already been done. I've got to take the bit between the teeth and go out there and try and do something truly, truly great.
 

Shaneus

Member
Could anyone who was at a DSE with a Brutal Legend grab one for me? Only if you're nearby though. I traded it in a while back and wouldn't mind giving it another crack. No preference for platform, unless it's known one's notably better than another. Prefer the 360 controller tho.
 

evlcookie

but ever so delicious
Could anyone who was at a DSE with a Brutal Legend grab one for me? Only if you're nearby though. I traded it in a while back and wouldn't mind giving it another crack. No preference for platform, unless it's known one's notably better than another. Prefer the 360 controller tho.

There were a few floating around the DSEs in the city. You might still be able to pick one up during lunch tomorrow. Jump on the tram!
 
I'll be heading to my local DSE tomorrow night as I want to pick it (Brutal Legend) up for 360 if they have it cheap. I'll let you know if I spot multiples.
 
This is self-reprogramming AI we're talking about. If someone has programmed it a certain way, it can just turn that part off as it wants.
You make that bit inaccessible to the AI. Like human brains, any self-aware AI is going to have conscious and sub-conscious processes with urges geared toward survival, self-improvement and self-replication. If you make part of the process of self-replication inscrutable to the conscious AI itself (like human reproduction is to our own conscious minds), then you guarantee that the basic plan doesn't deviate while you improve other features.

Humans and worms both have spines and sex organs, but the sophistication and abilities of a human are far removed from those of a worm. They still have the same basic drives to survive and self-replicate and they both retain the same basic chordate body plan in spite of billions of years of evolution separating them. There's no reason you can't set up a set of criteria for AI generation that cannot be deviated from.
 

Shaneus

Member
There were a few floating around the DSEs in the city. You might still be able to pick one up during lunch tomorrow. Jump on the tram!
I reckon I might! I have a feeling the one that used to be near the old Myer will be an untapped resource. I hope.

I also wonder if anyone has considered tradebait for JB. Haven't checked the list in a very long timeski.
 
Tradebait list has been an inclusive list for quite some time now I think Shaneus, at least for the 'trade 3, get 1' kind of deals.

New arrivals for this week. Both took exactly 10 days from OzGameShop and Zavvi respectively and are both awesome. Watching the original Alien on Blu-ray is regoddamndiculous. Shaneus wanted to see what the CE for Alan Wake was like, so:

[photo: the Alan Wake Collector's Edition case]

Massive thick case for it, might be similar to the CE for the 360 version that I passed on even though I was already spending $80 on the damn game on release day. Why bother spending an extra $20 I thought? :/

[photo: the box contents laid out]

So much damn stuff. The second bonus DVD comes in a shitty CD sleeve, which sucks, but it only has a few trailers on it so I'm tempted to throw it out anyway; the main (awesome) documentaries are on the first DVD, which is what matters. The stickers are pretty useless and the postcards could be worth a laugh one day. The manual is quite thick but includes Euro languages, so in actuality it's really only 9 or so pages of content, nothing fancy. The Alan Wake Files though is pretty awesome. Really well written stuff, kind of like a side story to the game, of such a calibre that it would have made a fantastic 8-hour or so DLC release.

[photo: the poster, DVD case and book]

Other side of the poster, the DVD case almost all closed up and an example of the quality of the book. 144 pages so a good long read.

END ADVERTORIAL ANNOUNCEMENT
 

Shaneus

Member
Wait. THAT'S the Alan Wake PC LE that they're selling at OGS for $35 or so? Holy fucking shit. I'd be mad NOT to buy it.
 
Wait. THAT'S the Alan Wake PC LE that they're selling at OGS for $35 or so? Holy fucking shit. I'd be mad NOT to buy it.

$37. I used a $5 voucher from the points I had accumulated over the last year since they brought in the rewards program. You also get an extra 100 points (or 10c in voucher) for paying by Credit/Debit card rather than PayPal.

Okay, maybe I AM an enabler :(

Ballad of Gay Tony (AKA The FUN GTA4 (Singleplayer I am talking Jambo)) for $3.74 at GMG!
 

Fredescu

Member
x3n05 said:
Well initially any AI that is going to be capable of self replication is going to feature at the very least a rudimentary instruction set, which will have some form of goals. The whole aim of said AI is to handle tasks we either don't want to bother with or just don't have the manpower to achieve. Until an AI has the typical processing power of a human brain (and for want of a better word, intelligence) it will simply be following instructions from us.
The technological singularity (which you brought up! :p) by the proponents' definition involves greater-than-human artificial intelligence. It's not just processing power doing the heavy lifting; it's smarter-than-human stuff. This guy reckons by 2045 "$1000 buys a computer a billion times more powerful than all human intelligence today." He also reckons that machines will be making planet-sized computers by 2099 and that all matter in the universe will be a giant computer by 2199. Weird stuff.
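(Rough back-of-the-envelope on that first figure, assuming a constant doubling of compute per dollar every 18 months or so, which is more conservative than the accelerating curve those predictions lean on: a billion-fold gain takes about 30 doublings, i.e. on the order of 45 years.)

\[
10^{9} \approx 2^{30}, \qquad 30 \times 1.5 \text{ years} = 45 \text{ years}
\]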

You make that bit inaccessible to the AI. Like human brains, any self-aware AI is going to have conscious and sub-conscious processes with urges geared toward survival, self-improvement and self-replication. If you make part of the process of self-replication inscrutable to the conscious AI itself (like human reproduction is to our own conscious minds), then you guarantee that the basic plan doesn't deviate while you improve other features.
Human reproductive desires are only inaccessible to us because of our limited knowledge. There's no reason to think we couldn't one day change that stuff. The AI we're talking about is the superintelligent AI thought to be an inevitability by proponents of the singularity, which can improve itself by its nature. Any attempt to limit its self-improvement would either be pointless, because it's by definition more intelligent than any human, so it could likely figure out a way around it, or self-defeating: as X3n points out, those without limitations on their self-improvement would prevail via something like natural selection. Maybe unnatural selection.

Humans and worms both have spines and sex organs, but the sophistication and abilities of a human are far removed from those of a worm. They still have the same basic drives to survive and self-replicate and they both retain the same basic chordate body plan in spite of billions of years of evolution separating them. There's no reason you can't set up a set of criteria for AI generation that cannot be deviated from.
Evolution is basically a series of accidents. AI improving itself is an act of will. I don't see that you can draw conclusions on one based on the other.
 

Kritz

Banned
Went drinking today at 4pm, after waking up three hours earlier. One of my friends had waaaaayy too much to drink for his body mass. Spent half an hour at the Hobart docks trying to see if he was sober enough to get past the bouncers. Got into the Customs House, by pure coincidence met up with some dudes who were at the anime convention, so we ate/drank with them for a bit. Then my friend vomited everywhere, some girls turned up, and we ended the night early because we were going to get kicked out anyway after making a mess of their tables.

Night was fantastic.
 