
The Source Code (MUST READ)


iapetus

Scary Euro Man
border said:
Could this ever work, in any form? Maybe not movies as a few kilobytes and source files as a few hundred megabytes....but maybe 200MB movies and 1 TB source files?

Short answer: No.

Long answer: Um... no.
 

Monk

Banned
Compression only comes at the cost of variation. That's just how compression works: the patterns are what gets compressed, and everything outside those patterns has to be stored without compression.

In theory you could even make a Doom clone with bump mapping in 64k.

Here is one at 96k

http://www.theprodukkt.com/kkrieger.html
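To illustrate the trick behind demos like that, here's a rough sketch (mine, nothing to do with .theprodukkt's actual code) of procedural content: instead of storing a texture, you store the handful of parameters that regenerate it.

```python
# A few bytes of parameters expand into kilobytes of texture at load time.
# Purely illustrative; .kkrieger's real generators are far more elaborate.
import math

def procedural_texture(width, height, seed=7, octaves=3):
    """Generate a grayscale texture from a handful of parameters."""
    pixels = []
    for y in range(height):
        for x in range(width):
            v = 0.0
            for o in range(1, octaves + 1):
                v += math.sin((x * o + seed) * 0.1) * math.cos((y * o + seed) * 0.1) / o
            pixels.append(int((v + 2) * 63))  # map roughly into 0..255
    return pixels

tex = procedural_texture(256, 256)
print(f"stored: ~4 parameters; produced: {len(tex)} pixel values (64 KB raw)")
```

The catch, of course, is that this only works for content you generate yourself; it doesn't let you compress an arbitrary existing movie.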
 
The question why industry heads would get behind this was raised, and the answer's simple: when supposedly new technology comes around, you get behind it as quickly as possible. These industry heads had a tech demo, a crackpot explanation to work off of, and nothing more. By investing a relatively small amount in an unproven yet potentially revolutionary idea, they're safeguarding their own interests. If the idea fails, whoops. If it succeeds, you're in the money. If a revolutionary product presents itself, it's a good idea to be in on it; even if it fails, you aren't much worse off.
 

Dilbert

Member
silver said:
But everyone, tell me, HOW do people like Pieper, Wang and Philips and CA scientists fall for a crackpot trick?
Have you ever heard of the "dot com" era? Supposedly smart people fell for all kinds of shit, and they even threw money away on stupid ideas!

Compression is the result of mapping the contents of one set of data to the contents of another, smaller set of data. As others have eloquently pointed out, the level of compression being claimed is utterly absurd.
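To make that mapping concrete, here's a toy run-length encoder (my own illustration, not anything from Sloot's system). Note that it only shrinks repetitive inputs; inputs without runs come out bigger, which is the whole problem in miniature.

```python
# Toy "compression is a mapping" demo: run-length encoding.
def rle(data: bytes) -> bytes:
    out = bytearray()
    i = 0
    while i < len(data):
        run = 1
        while i + run < len(data) and data[i + run] == data[i] and run < 255:
            run += 1
        out += bytes([run, data[i]])  # (count, value) pairs
        i += run
    return bytes(out)

print(len(rle(b"a" * 1000)))        # 8 bytes: repetitive data compresses well
print(len(rle(bytes(range(256)))))  # 512 bytes: run-free data DOUBLES in size
```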

(By the way, your little "GAYMING AGE" crack was noted and not appreciated.)
 

Dsal

it's going to come out of you and it's going to taste so good
As Wolfram pointed out in his unnecessarily large and mostly boring book "A New Kind of Science", things of very high complexity can be described by small rulesets (like 4 logical operations) given small inputs (as small as 2 bits)... and enough free memory to output to, of course.

Perhaps this guy figured out some way to find the right rulesets and input bits to break down large blocks of bits into self contained procedural cellular automata generators that could be described in a few bits.

Of course, we all know how incredibly hard it would be to do that. That's like solving NP-complete problems in polynomial time. Chances are it was all bunk. But theoretically, if someone could "reverse procedurally generate" things, you could get crazy compression ratios.
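For the curious, here's what "high complexity from a tiny ruleset" looks like in practice: an elementary cellular automaton (my sketch of Wolfram's Rule 30, not anything of Sloot's). An 8-bit rule and a one-cell seed produce endlessly intricate output.

```python
# Rule 30: each new cell is looked up from its three neighbours' bits.
RULE = 30  # the entire "program" fits in one byte

def step(cells):
    n = len(cells)
    return [(RULE >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
            for i in range(n)]

cells = [0] * 31 + [1] + [0] * 31  # a single live cell
for _ in range(16):
    print("".join(".#"[c] for c in cells))
    cells = step(cells)
```

The hard direction, as Hitokage points out below, is going backwards from a desired output to the rule and seed that generate it.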
 

silver

Banned
You know, you may all think I'm retarded, but seriously: if somebody discovers a totally new way of compressing data that no one knows about, of course you think "impossible, retarded". Because if you didn't, you'd be the one working on that code.

I could tell you how it works from what I've read, but I doubt I could explain it well.

The question why industry heads would get behind this was raised, and the answer's simple: when supposedly new technology comes around, you get behind it as quickly as possible. These industry heads had a tech demo, a crackpot explanation to work off of, and nothing more. By investing a relatively small amount in an unproven yet potentially revolutionary idea, they're safeguarding their own interests. If the idea fails, whoops. If it succeeds, you're in the money. If a revolutionary product presents itself, it's a good idea to be in on it; even if it fails, you aren't much worse off.

Right. One thing I didn't tell you is that Pieper quit his multi-million-dollar-a-year position at Philips to head up Fifth Force.

Compression is the result of mapping the contents of one set of data to the contents of another, smaller set of data. As others have eloquently pointed out, the level of compression being claimed is utterly absurd.

Point taken. Now read up on how Sloot's technology worked, then come back to criticize.


For the record: Pieper gained complete control of Fifth Force, Inc. and the company still exists today, because the real code still hasn't been found. Sloot claimed that it was somewhere in a vault, but no one knows. Sloot was also a heart patient. He was not murdered. He was under pressure, had just travelled through the US with Pieper and died at a bad time. Still, if his story was bullshit... talk about going out with a bang.
 

Hitokage

Setec Astronomer
As Wolfram pointed out in his unnecessarily large and mostly boring book "A New Kind of Science", things of very high complexity can be described by small rulesets (like 4 logical operations) given small inputs (as small as 2 bits)... and enough free memory to output to, of course.
Comments about Wolfram's unoriginality and hyperbole aside, even if you did find such a construct for a desired pattern, you'd still have to define what output is useful and what is to be discarded.
 

xabre

Banned
If anyone can figure this shit out then good on you -

http://www.endlesscompression.com/

Jan Sloot's principle looks like that of Klaus Holtz, with the difference that Sloot made a fixed, static reference memory with all the unique data already in it, while Holtz made his dynamic, as a self-learning system; also, Sloot's final output key was only 1 KB in size. As written in the book "De broncode", Sloot used 5 algorithms, each needing 12 MB, which included storage for temporary calculations. He was working on a new application that needed 74 MB per algorithm to store the temporary calculations for longer movies/TV programs, probably to hold the larger number of frame keys after the 1 KB input key was decoded. The advantage of Sloot's system was that it would be possible to build into every electronic device the processors with the algorithms, including the reference memory and the memory for temporary calculation storage. After that, only a single 1 KB key code per movie or TV program was needed to generate the frames for displaying it on a display device.

Let's say one movie/program frame is 1024x640 = 655,360 pixels
According to Jan Sloot second patent:
One block is 16x16=256 pixels
And 64 blocks are one row
Then there are 655,360/256=2,560 blocks in a frame
And 655,360/(256*64)=40 rows in a frame

If there are 25 frames a second and a movie is 90 minutes then:
There are 655,360x25x60x90=88,473,600,000 pixels in a movie/program
88,473,600,000/256=345,600,000 blocks in a movie/program
88,473,600,000/(256x64)=5,400,000 rows in a movie/program
88,473,600,000/655,360=135,000 frames in a movie/program

[Figure 3: block diagram of the Sloot Digital Coding System (sdcs.gif)]


Figure 3 explanation:

30 reference memory: contains all possible pixel values (256, 2,560 or 102,400 colour values)
31 1st (de)coding part(*): compares every decoded pixel value with the reference memory (30)
32 pixel memory: stores pixel codes, 256 pixel values stored
33 2nd (de)coding part: generates a block code from 256 pixels
34 block memory: stores block codes, 64 block values stored
35 3rd (de)coding part: generates a row code from 64 blocks
36 row memory: stores row codes, 40(**) row values stored
37 4th (de)coding part: generates a frame code from 40(**) rows
38 frame memory: stores frame codes, 135,000(***) frame values stored
39 5th (de)coding part: generates a movie/program code from 135,000(***) frames
40 movie/program memory: stores movie/program codes, 1 KB each

* Also digital video signal input.
** Dependent on frame pixel size.
*** Dependent on frames per second and movie/program length.


41 key processor decoding part: checks that all blocks, rows and frames are stored only once and that, for duplicates, only coordinates are stored
42 storage (chip card): keeps a copy of the movie/program memory (40) and calculations from the key processor (41)
43 input-output equipment (chip card reader)
44 key processor coding part(*): stores the movie/program code in the movie/program memory (40)

* Also digital video signal output.

In the above example pixels are used, but the same would apply to audio or text.
Details about the reference memory storage and the key code algorithms are not explained in this patent description.
If, for example, a video input pixel is 1 byte, then each coding part (5 in total) must generate an output key 40 times smaller than the input data to end up with a ~1 KB key:
88,473,600,000 bytes / (40x40x40x40x40) = 864 bytes (without audio).
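The arithmetic above is easy to sanity-check (my sketch, using the patent's figures of 1024x640 pixels, 25 fps and 90 minutes):

```python
# Sanity check of the patent walk-through's numbers.
pixels_per_frame = 1024 * 640        # 655,360
pixels_per_block = 16 * 16           # 256
blocks_per_row = 64
frames = 25 * 60 * 90                # 135,000

total_pixels = pixels_per_frame * frames                    # 88,473,600,000
print(total_pixels // pixels_per_block)                     # 345,600,000 blocks
print(total_pixels // (pixels_per_block * blocks_per_row))  # 5,400,000 rows
print(total_pixels // pixels_per_frame)                     # 135,000 frames

# Each of the 5 coding stages must shrink its input ~40x to end near 1 KB:
print(total_pixels // 40**5)         # 864 bytes, assuming 1 byte per pixel
```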
 

Hitokage

Setec Astronomer
xabre: Basically it says that you can expect to compress a file to 1/40th its original size, then do it again and get equally good results, then do it three more times to get a roughly 1 KB compressed movie.
 

Ghost

Chili Con Carnage!
I discovered the cure for cancer in a dream, right down to the exact dosage for each type of cancer... if only I remembered my dreams.




Something to do with Campbell's chicken soup and a can of Sprite...
 
The Faceless Master

i remember that other time when a lot of the technology guys proclaimed something as the next coming of jesus christ, it was heralded by many as a revolution...


what did we get?

[Image: the Segway (10l.jpg)]
 

Monk

Banned
88,473,600,000/655,360=135,000 frames in a movie/program

that's 1/2 a byte per frame.



The only way I can see it being possible is this: assume you have a single algorithmically describable shape that stays put or moves predictably (for pans etc.), with only minor variations, and that it's on screen for, say, 256 frames (about 9 seconds). A 256-pixel single-colour square object that would normally take at least 256 kilobytes could then be reduced to ~10 bytes (4 bytes for colour, ~6 for the square algorithm), shrinking the space used by a factor of about 25,600. But there are always elements that exist outside the formulas, and because of that they can take tons of space. Doing it in 64k is an extreme case where all the algorithms fall into place, there are no outside elements, and there's no change in lighting that alters the pixel colours slightly.
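Rough numbers for that square example (my arithmetic, with made-up byte counts for the "algorithm"):

```python
# A single-colour 16x16 square tracked across 256 frames:
# raw storage vs. a tiny procedural description.
pixels = 16 * 16                  # 256-pixel object
frames = 256
raw = pixels * frames * 4         # 4 bytes per pixel -> 262,144 bytes (~256 KB)
procedural = 4 + 6                # 4 bytes colour + ~6 bytes "draw square here"
print(raw, procedural, raw // procedural)   # a ~26,000x reduction
```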
 

GLoK

Member
The Faceless Master said:
i remember that other time when a lot of the technology guys proclaimed something as the next coming of jesus christ, it was heralded by many as a revolution...


what did we get?

http://pcweb.mycom.co.jp/news/2002/04/01/10l.jpg


:lol

this is EXACTLY what I was thinking. I remember reading a quote from Steve Jobs when the Segway was still heavily under wraps.

It was something along the lines of "Cities will be built around this technology". Something to that effect, anyway. I think the only result *I* noticed from this revolution was the chorused laughter of everyone when the first sneak peek video of it was released.

"That's it? A walker with wheels?! HAHAHAHAHA!"
 

duckroll

Member
God's Hand said:
I'm confused. If they know how to do it, why haven't they done it?

They don't. Here's a summary of the thread:

This guy claims to have created something impossible and didn't tell anyone how he did it. A few people poured millions into it because they believed it was possible without knowing how. The guy died before he told anyone how he did this impossible thing. People like silver come on GAF and try to convince everyone it's actually possible, but can't prove it either way because the guy died, and since the guy is dead no one will ever know if he truly made the impossible possible. Yeah. :D
 

Dead

well not really...yet
GLoK said:
:lol

this is EXACTLY what I was thinking. I remember reading a quote from Steve Jobs when the Segway was still heavily under wraps.

It was something along the lines of "Cities will be built around this technology". Something to that effect, anyway. I think the only result *I* noticed from this revolution was the chorused laughter of everyone when the first sneak peek video of it was released.

"That's it? A walker with wheels?! HAHAHAHAHA!"
Oh god I remember.

They were all like "IT" will revolutionize the world! Cities will have to be redesigned!

And when it was unveiled :lol :lol
 

Dsal

it's going to come out of you and it's going to taste so good
Hitokage said:
Comments about Wolfram's unoriginality and hyperbole aside, even if you did find such a construct for a desired pattern, you'd still have to define what output was useful and what is to be discarded.

Yep. But I suppose it's possible, although unlikely, that for any given block of data there is a procedural algorithm that could precisely generate it. Maybe if someone was able to construct a huge database of mappings from all possible block values (heh...) to a generating procedural algorithm, it could work. Then they'd just look up the block in the database and only output the procedural algorithm's parameters to the file. You could then take the output and repeat the process until no further compression was realized.
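One way to see the problem with the lookup-database idea, in numbers (a quick sketch of mine): to name one entry among all possible N-bit blocks, the index itself needs N bits, so on average nothing is saved.

```python
# The index into a table of all 2**N possible N-bit blocks is itself N bits.
import math

block_bits = 64
entries = 2 ** block_bits
index_bits = math.ceil(math.log2(entries))
print(index_bits)  # 64 -- the "compressed" reference is as big as the block
```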
 

sc0la

Unconfirmed Member
this thread is funny as shit :lol

Not to contribute to the lunacy, but what about quantum data sets? If a piece of information can have multiple simultaneous states, wouldn't that reduce the number of sets required to describe something? (Sorry for the complete layman-speak.) But that's not compression.
 

aaaaa0

Member
border said:
Here is a very technical description of what Jan Sloot was working towards:

http://www.endlesscompression.com/

Could this ever work, in any form? Maybe not movies as a few kilobytes and source files as a few hundred megabytes....but maybe 200MB movies and 1 TB source files? The debunking article is good, though I'm not sure how solid some parts of the rhetoric are ("A source file can't account for every movie because there's an infinite number of movies possible" Huh?).

Short answer: No.

Slightly longer answer: No, it would never work, in any form.

Long answer: to understand why it won't work, read this FAQ: http://www.faqs.org/faqs/compression-faq/part1/section-8.html
 

aaaaa0

Member
Dsal said:
Yep. But I suppose it's possible, although unlikely, that for any given block of data there is a procedural algorithm that could precisely generate it. Maybe if someone was able to construct a huge database of mappings from all possible block values (heh...) to a generating procedural algorithm, it could work. Then they'd just look up the block in the database and only output the procedural algorithm's parameters to the file. You could then take the output and repeat the process until no further compression was realized.

It won't work. The pigeonhole principle guarantees that I can generate a sequence that won't be covered by any of your procedural algorithms, as long as the input to your system has fewer bits than the output I get back.

http://www.dogma.net/markn/FAQ.html#Q19
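The counting argument from that FAQ, in a few lines (my sketch):

```python
# There are 2**n distinct n-bit files, but only 2**n - 1 files strictly
# shorter than n bits, so no lossless scheme can shrink every input.
def shorter_outputs(n):
    return sum(2 ** k for k in range(n))  # = 2**n - 1

for n in (8, 64, 1024):
    assert shorter_outputs(n) < 2 ** n     # always one pigeon too many
    print(n, 2 ** n - shorter_outputs(n))  # the shortfall is exactly 1
```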
 

Acrylamid

Member
About this whole compression issue... there's an idea I had a while ago that could shrink every movie down to less than a KB; the only problem is, you'd need really fast computers (or a lot of time) to "decompress" them.
In the file-sharing program eMule, each file (<4 GB) gets its unique 128-bit MD4 hash. Wouldn't this mean that when you know the MD4 hash of a certain version of a movie, your PC could create a file starting with 00000...01, hash it, check the newly generated hash against the wanted hash and, if the hashes didn't match, create the next file (00000...11)? After a long time, your computer would have created the right file, the hashes would match, and you would have the movie without having to download it; in a way, all the information was contained in the MD4 hash...
http://www.amule.org/wiki/index.php/MD4_hash
Would this be possible with very fast computers and a lot of patience, or where is my mistake?
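Here's that "decompressor" written out (a sketch; MD5 stands in for eMule's MD4, since hashlib's MD4 support depends on the local OpenSSL build). Even a 4-byte "movie" can take up to 2^32 hash attempts; the example target is chosen small so the loop actually finishes.

```python
import hashlib
from itertools import count

def find_preimage(target_hex, size_bytes):
    """Brute-force the first file of the given size matching the hash."""
    for n in count():
        candidate = n.to_bytes(size_bytes, "big")
        if hashlib.md5(candidate).hexdigest() == target_hex:
            return candidate  # NB: might be a collision, not the original
        if n == 2 ** (8 * size_bytes) - 1:
            return None       # every possible file of this size exhausted

movie = b"\x00\x00\x13\x37"   # a very short "movie"
print(find_preimage(hashlib.md5(movie).hexdigest(), 4))
```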
 

iapetus

Scary Euro Man
silver said:
You know, you may all think I'm retarded, but seriously: if somebody discovers a totally new way of compressing data that no one knows about, of course you think "impossible, retarded". Because if you didn't, you'd be the one working on that code.

No.

It's a comforting idea for the tin-foil hat conspiracy theorists, but this isn't a question of different types of compression. It's a question of whether what's being discussed is possible at all, regardless of the approach.

You're like someone posting about a technique someone discovered for turning lead into gold through purely chemical reactions. "Ah," you might cry, "but of course you don't believe it's possible, or you'd be out there turning lead into gold yourselves." But it's still fundamentally impossible, to the point that we don't care what you think the technique is, or how many incomprehensible pseudo-science websites you point us at.

The same holds true for this compression issue.
 

iapetus

Scary Euro Man
Acrylamid said:
About this whole compression issue... there's an idea I had a while ago that could shrink every movie down to less than a KB

You can create a hash of any size.

So let's take your idea further - from a movie, we create a single-bit hash. Then from that single bit of data we can reconstitute the original movie through the process you describe.

Here are some movies for you to download:

Spiderman 2: 0
Constantine: 1
Battlefield Earth: 1
Star Wars Episode 2: 1
Robots: 0

Enjoy.
 

Hitokage

Setec Astronomer
Dsal said:
Yep. But I suppose it's possible, although unlikely, that for any given block of data there is a procedural algorithm that could precisely generate it. Maybe if someone was able to construct a huge database of mappings from all possible block values (heh...) to a generating procedural algorithm, it could work. Then they'd just look up the block in the database and only output the procedural algorithm's parameters to the file. You could then take the output and repeat the process until no further compression was realized.
It'd be easier in a programming sense to just iterate digits of pi until you came across the desired values. :p
 

Nerevar

they call me "Man Gravy".
Acrylamid said:
About this whole compression issue... there's an idea I had a while ago that could shrink every movie down to less than a KB; the only problem is, you'd need really fast computers (or a lot of time) to "decompress" them.
In the file-sharing program eMule, each file (<4 GB) gets its unique 128-bit MD4 hash. Wouldn't this mean that when you know the MD4 hash of a certain version of a movie, your PC could create a file starting with 00000...01, hash it, check the newly generated hash against the wanted hash and, if the hashes didn't match, create the next file (00000...11)? After a long time, your computer would have created the right file, the hashes would match, and you would have the movie without having to download it; in a way, all the information was contained in the MD4 hash...
http://www.amule.org/wiki/index.php/MD4_hash
Would this be possible with very fast computers and a lot of patience, or where is my mistake?


Hashing isn't a one-to-one mapping. In fact, you can "fake" MD4 hashes, i.e. produce a different file with the same hash (although it is supposedly quite difficult). So even if you built a file that matched the hash, it wouldn't necessarily be the original.
 
Acrylamid said:
Would this be possible with very fast computers and a lot of patience, or where is my mistake?

Let's do a little test run on a small file. 2kb sound reasonable to you?


2kb = 2048 bytes = 16384 bits
To brute force this, you would have to try only 2^16384 combinations.

Let's say that it took one microsecond to generate a file and test a combination.

Generating the file would take about 3.8E4918 years.

I'm not *that* patient.

Granted, I'm not good at math, but I think I did this right.
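Checking that estimate (my arithmetic):

```python
# 2**16384 candidate files at one microsecond each, in years.
from math import log10

bits = 2048 * 8                       # a 2 KB file
log_tries = bits * log10(2)           # log10(2**16384), about 4932
log_seconds = log_tries - 6           # one microsecond per attempt
log_years = log_seconds - log10(31_557_600)
print(f"~10^{log_years:.1f} years")   # ~10^4918.6 years
```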
 
gofreak said:
[Screenshot: .kkrieger (full1.jpg)]


This is the best part of this thread. Most impressive. 96KB!!

(I kinda got "stuck" midway through, the controls are clunky etc., but... wow)

I wonder if Will Wright hired these guys? I wonder how readable their code is? :lol


.kkrieger relies heavily on model instancing: positioning and deforming those instances with basic values. I believe the Jak and Daxter engine does something quite similar for the creation of its environments. The level editor would be a 3D space where you bring in the environment model and duplicate it over and over again, then export only the coordinates, as opposed to the total mesh you've created. A very clever concept, though limited by the number of base objects you bring in.
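A back-of-envelope on why instancing pays off (illustrative numbers of my own, not Naughty Dog's):

```python
# Store each base mesh once, then just a small transform per placed copy.
base_mesh_bytes = 50_000           # one environment model, stored once
copies = 400                       # times it appears in the level
transform_bytes = 4 * (3 + 4 + 3)  # float32 position + quaternion + scale

duplicated = base_mesh_bytes * copies
instanced = base_mesh_bytes + copies * transform_bytes
print(duplicated, instanced, duplicated // instanced)  # roughly 300x smaller
```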
 

CrunchyB

Member
border said:
What truly amazes me is that a man as Roel Pieper, who is a professor of Computer Science no less, could fall for his story, to the point where he invested a huge amount of capital. If his role in this story is really as reported in the media, his credibility as a computer scientist has been seriously tarnished. In my opinion, the University of Twente, with which Pieper is associated, should at least perform an internal investigation, to assess whether Pieper's position can be maintained.

I study CS at the University of Twente. Yeah, Pieper is a bit of a joke here; I've heard people refer to Sloot (Dutch for "ditch") & Pieper (Dutch for "potato") as "the people who could store a byte in a bit".
Pieper isn't a proper professor, he's just a PR front. He doesn't actually do anything but give the occasional nonsensical visionary lecture. The only reason he's associated with this university is that he used to run Philips, which is a huge company research-wise.

This whole Sloot deal is embarrassing, but no big deal.
 

aaaaa0

Member
Hitokage said:
It'd be easier in a programming sense to just itterate digits of pi until you came across the desired values. :p
Yup. Though even if you did that, you'd need to store the offset into pi for someone else to get the data back.

You can show that the offset would have to be, on average, just as long as the data you wanted to generate (and that's only if the digits of pi are completely random, which I don't think has been proven yet).
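A quick sketch of the pi-offset scheme (using mpmath for the digits; the search strings are just examples). On average a k-digit string first shows up around 10^k digits in, so the offset you'd have to store is itself about k digits long.

```python
from mpmath import mp

mp.dps = 100_000
pi_digits = mp.nstr(mp.pi, 100_000).replace(".", "")

for data in ("14159", "999999", "0123456789"):
    offset = pi_digits.find(data)
    if offset == -1:
        print(data, "doesn't even appear in the first 100,000 digits")
    else:
        print(data, "at offset", offset,
              f"({len(str(offset))} digits stored for {len(data)} digits of data)")
```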
 

iapetus

Scary Euro Man
aaaaa0 said:
Yup. Though even if you did that, you'd need to store the offset into pi for someone else to get the data back.

No you wouldn't. Just store that offset in the same way, thus saving even more space! I'm a genius! Now, time to go get some venture capital funding...
 

Hitokage

Setec Astronomer
iapetus said:
No you wouldn't. Just store that offset in the same way, thus saving even more space! I'm a genius! Now, time to go get some venture capital funding...
Recursive endless encryption... :lol
 

Monk

Banned
I wonder if Will Wright hired these guys? I wonder how readable their code is?

:lol I can imagine it now, no internal documentation whatsoever and it looks like one big mathematical formula.
 

Dsal

it's going to come out of you and it's going to taste so good
aaaaa0 said:
It won't work. The pigeonhole principle will guarantee that I can generate a sequence that won't be covered by any of your procedural algorithms, as long as the number of bits of input to your system is less in length than the number of bits I get back out.

http://www.dogma.net/markn/FAQ.html#Q19

Yeah, about 5 minutes after I posted that I figured that out :D, so yeah, the "precisely generate" thing ain't gonna happen. The "offsets into a procedural sequence" thing could still work for some of the blocks, though.

If someone generated a big database of block values that could be procedurally generated, you could recursively try to find those patterns in the data and shrink things down. You could compress stubborn non-matching blocks with more traditional compression as well.

I might try a quick experiment with this stuff. It won't work but it'll be fun to try :D
 

fart

Savant
jesus christ i can't fucking believe you people. this is a load of bullshit. do some reading and think for yourself for a fucking change.

can you compress a movie to a single bit? yes. would you lose every piece of information that makes the movie unique? yes.
 

Coen

Member
CrunchyB said:
I study CS at the University of Twente. Yeah, Pieper is a bit of a joke here; I've heard people refer to Sloot (Dutch for "ditch") & Pieper (Dutch for "potato") as "the people who could store a byte in a bit".
Pieper isn't a proper professor, he's just a PR front. He doesn't actually do anything but give the occasional nonsensical visionary lecture. The only reason he's associated with this university is that he used to run Philips, which is a huge company research-wise.

This whole Sloot deal is embarrassing, but no big deal.

I do feel Pieper is a visionary. I know his ideas are somewhat out of the ordinary, but in essence his thoughts hold a lot of truth. The story is true; I don't know about those new compression tools. Sloot had some pretty heavy investment behind him, especially for a one-man plan. It has been suggested that he was killed because of his ideas.
 