border said:Could this ever work, in any form? Maybe not movies as a few kilobytes and source files as a few hundred megabytes....but maybe 200MB movies and 1 TB source files?
Short answer: No.
Long answer: Um... no.
border said:Could this ever work, in any form? Maybe not movies as a few kilobytes and source files as a few hundred megabytes....but maybe 200MB movies and 1 TB source files?
Have you ever heard of the "dot com" era? Supposedly smart people fell for all kinds of shit, and they even threw money away on stupid ideas!
silver said:But everyone, tell me, HOW do people like Pieper, Wang and Philips and CA scientists fall for a crackpot trick?
The question of why industry heads would get behind this was raised, and the answer's simple: when supposedly new technology comes around, you get behind it as quickly as possible. These industry heads had a tech demo, a crackpot explanation to work off of, and nothing more. By investing a relatively small amount in an unproven yet potentially revolutionary idea, they're safeguarding their own interests. If the idea fails, whoops. If it succeeds, you're in the money. If a revolutionary product presents itself, it's a good idea to be in on it--even if it fails, you aren't much worse off.
Compression is the result of mapping the contents of one set of data to the contents of another, smaller set of data. As others have eloquently pointed out, the level of compression being claimed is utterly absurd.
As Wolfram pointed out in his unnecessarily large and mostly boring book "A New Kind of Science", things of very high complexity can be described by small rulesets (like 4 logical operations) given small inputs (as small as 2 bits)... and enough free memory to output to, of course. Comments about Wolfram's unoriginality and hyperbole aside, even if you did find such a construct for a desired pattern, you'd still have to define what output was useful and what is to be discarded.
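To make that concrete, here's a minimal sketch (my own illustration, not anything from the thread or Wolfram's book) of an elementary cellular automaton. The entire "program" is one 8-entry rule table; Rule 110 is the classic example of complex output from a tiny rule:

```python
# Elementary cellular automaton: the whole ruleset is the 8-bit number RULE.
# Rule 110 is Wolfram's standard example of complexity from a tiny rule.
RULE = 110

def step(cells):
    n = len(cells)
    # Each new cell is decided by its 3-bit neighbourhood (left, self, right),
    # used as an index into the bits of RULE.
    return [(RULE >> ((cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n])) & 1
            for i in range(n)]

cells = [0] * 63 + [1]  # start from a single live cell
for _ in range(30):
    print("".join(".#"[c] for c in cells))
    cells = step(cells)
```

Tiny rule, tiny input, intricate output; but as noted above, that says nothing about which outputs are useful or how to find the rule for a pattern you already have.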
Jan Sloot's principle looks like that of Klaus Holtz, with the difference that Sloot used a fixed, static reference memory with all the unique data already in it, while Holtz made his dynamic, as a self-learning system; Sloot's final output key was also only 1Kb in size. According to the book "De broncode", Sloot used 5 algorithms, each needing 12Mb, which included storage for temporary calculations. He was working on a new application that needed 74Mb per algorithm to store the temporary calculations for longer movies/TV programs, probably to hold the larger number of frame keys after the 1Kb input key was decoded. The advantage of Sloot's system was that every electronic device could be fitted with processors containing the algorithms, the reference memory, and the memory for temporary calculations. After that, only a single 1Kb key code per movie or TV program was needed to generate the frames for display.
Let's say one movie/program frame is 1024x640 = 655,360 pixels.
According to Jan Sloot's second patent:
One block is 16x16=256 pixels
And 64 blocks are one row
Then there are 655,360/256=2,560 blocks in a frame
And 655,360/(256*64)=40 rows in a frame
If there are 25 frames a second and a movie is 90 minutes then:
There are 655,360x25x60x90=88,473,600,000 pixels in a movie/program
88,473,600,000/256=345,600,000 blocks in a movie/program
88,473,600,000/(256*64)=5,400,000 rows in a movie/program
88,473,600,000/655,360=135,000 frames in a movie/program
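As a quick sanity check, a few lines of Python (my own, just recomputing the example figures above) reproduce these numbers:

```python
# Recompute the example figures: 1024x640 frames, 25 fps, 90 minutes.
pixels_per_frame = 1024 * 640             # 655,360
block = 16 * 16                           # 256 pixels per block
blocks_per_row = 64
frames = 25 * 60 * 90                     # 135,000
pixels = pixels_per_frame * frames        # 88,473,600,000

print(pixels // block)                     # 345,600,000 blocks
print(pixels // (block * blocks_per_row))  # 5,400,000 rows
print(pixels // pixels_per_frame)          # 135,000 frames
```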
Figure 3 explanation:
30 reference memory: contains all possible pixel values (colour values 256 or 2560 or 102400)
31 1st (de)coding part(*): compares every decoded pixel value with the reference memory (30)
32 pixel memory: stores pixel codes, 256 pixel values stored
33 2nd (de)coding part: generates a block code from 256 pixels
34 block memory: stores block codes, 64 block values stored
35 3rd (de)coding part: generates a row code from 64 blocks
36 row memory: stores row codes, 40(**) row values stored
37 4th (de)coding part: generates a frame code from 40(**) rows
38 frame memory: stores frame codes, 135,000(***) frame values stored
39 5th (de)coding part: generates a movie/program code from 135,000(***) frames
40 movie/program memory: stores movie/program codes, 1Kb each
* Also digital video signal input.
** Depends on the frame pixel size.
*** Depends on frames per second and movie/program length.
41 key processor decoding part: checks that all blocks, rows and frames are stored only once, and that in the case of duplicates only coordinates are stored
42 storage (chip card): keeps a copy of the movie/program memory (40) and calculations from the key processor (41)
43 input-output equipment (chip card reader)
44 key processor coding part(*): stores the movie/program code in the movie/program memory (40)
* Also digital video signal output.
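Item 41 is essentially deduplication. A minimal sketch of that step (my own toy illustration; the patent never discloses Sloot's actual method):

```python
# Toy version of item 41: store each distinct block once; repeats become
# references (here just an index, standing in for the patent's "coordinates").
blocks = [b"\x00" * 256, b"\x01" * 256, b"\x00" * 256]  # three 256-byte blocks

unique, refs, index = [], [], {}
for b in blocks:
    if b not in index:
        index[b] = len(unique)   # first occurrence: remember where it went
        unique.append(b)
    refs.append(index[b])        # every block is replaced by a reference

print(len(unique), refs)         # 2 unique blocks, refs [0, 1, 0]
```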
In the above example pixels are used, but the same scheme would also apply to audio or text.
Details about the reference memory storage and the key code algorithms are not explained in this patent description.
If, for example, a video input pixel is 1 byte, then every coding part (5 in total) must generate an output key 40 times smaller than its input data to end up with a 1Kb key.
88,473,600,000 bytes/(40x40x40x40x40) = 864 bytes (without audio).
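The same arithmetic in one line of Python (again just checking the figure above):

```python
# Five stages, each shrinking its input 40x: 88,473,600,000 / 40**5
print(88_473_600_000 / 40**5)   # 864.0 bytes, comfortably under a 1Kb key
```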
silver said:So does it work or not?
The Faceless Master said:I remember that other time when a lot of the technology guys proclaimed something as the next coming of Jesus Christ; it was heralded by many as a revolution...
what did we get?
http://pcweb.mycom.co.jp/news/2002/04/01/10l.jpg
God's Hand said:I'm confused. If they know how to do it, why haven't they done it?
Oh god I remember.
GLoK said::lol
this is EXACTLY what I was thinking. I remember reading a quote from Steve Jobs when the Segway was still heavily under wraps.
It was something along the lines of "Cities will be built around this technology". Something to that effect, anyway. I think the only result *I* noticed from this revolution was the chorused laughter of everyone when the first sneak peek video of it was released.
"That's it? A walker with wheels?! HAHAHAHAHA!"
EviLore said:This is a laugh riot.
Hitokage said:Comments about Wolfram's unoriginality and hyperbole aside, even if you did find such a construct for a desired pattern, you'd still have to define what output was useful and what is to be discarded.
border said:Here is a very technical description of what Jan Sloot was working towards:
http://www.endlesscompression.com/
Could this ever work, in any form? Maybe not movies as a few kilobytes and source files as a few hundred megabytes....but maybe 200MB movies and 1 TB source files? The debunking article is good, though I'm not sure how solid some parts of the rhetoric are ("A source file can't account for every movie because there's an infinite number of movies possible" Huh?).
Dsal said:Yep. But I suppose it's possible, although unlikely, that for any given block of data, there is a procedural algorithm that could precisely generate it. Maybe if someone was able to construct a huge database of mappings of all possible block values (heh...) to a generating procedural algorithm it could work. Then they'd just look up the block in the database and only output the procedural algorithm parameters to the file. You could then take the output and repeat the process until there was no further compression realized.
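The "repeat the process" part is where this breaks down. A toy version of that loop (my own sketch, using zlib as a stand-in for the hypothetical block-to-algorithm database) bottoms out almost immediately:

```python
# Keep recompressing until there's no further gain. After one pass the output
# is close to random, and no lossless scheme can keep shrinking random data.
import zlib

data = b"A" * 65536          # highly compressible starting point
passes = 0
while True:
    compressed = zlib.compress(data, 9)
    if len(compressed) >= len(data):   # no gain: the loop stops
        break
    data, passes = compressed, passes + 1
print(passes, len(data))      # typically a pass or two, then it's stuck
```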
Monk said:In theory you could even make a Doom clone with bump mapping in 64k
Here is one at 96k
http://www.theprodukkt.com/kkrieger.html
silver said:You know, you may all think I'm retarded, but seriously: if somebody discovered a totally new way of compressing data that no one knew about, of course you'd think "impossible, retarded". Because if you didn't, you'd be the one working on that code.
Acrylamid said:About this whole compression issue... there's an idea I had a while ago, that could shrink every movie down to less than a kb
It'd be easier in a programming sense to just iterate digits of pi until you came across the desired values.
Dsal said:Yep. But I suppose it's possible, although unlikely, that for any given block of data, there is a procedural algorithm that could precisely generate it. Maybe if someone was able to construct a huge database of mappings of all possible block values (heh...) to a generating procedural algorithm it could work. Then they'd just look up the block in the database and only output the procedural algorithm parameters to the file. You could then take the output and repeat the process until there was no further compression realized.
Acrylamid said:About this whole compression issue... there's an idea I had a while ago that could shrink every movie down to less than a kb; the only problem is, you'd need really fast computers (or a lot of time) to "decompress" them.
In the file sharing program eMule, each file (<4 GB) gets its unique 128-bit MD4 hash. Wouldn't this mean that when you know the MD4 hash of a certain version of a movie, your PC could create a file starting with 00000...01, hash it, check the newly generated hash against the wanted hash, and if the hashes didn't match, create the next file (00000...11)? After a long time, your computer would have created the right file, the hashes would match, and you would have the movie without having to download it; in a way, all the information was contained in the MD4 hash...
http://www.amule.org/wiki/index.php/MD4_hash
Would this be possible with very fast computers and a lot of patience or where is my mistake?
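The mistake is twofold, and a toy version makes both halves visible. Here's my own sketch of the brute-force loop at a size where it can actually finish (MD5 stands in for MD4, since MD4 isn't available in every hashlib build; the argument is identical):

```python
# Brute-force a 2-byte "file" back out of its hash alone.
import hashlib
from itertools import product

target_file = b"\x13\x37"
target_hash = hashlib.md5(target_file).digest()

for candidate in product(range(256), repeat=len(target_file)):  # 65,536 tries
    data = bytes(candidate)
    if hashlib.md5(data).digest() == target_hash:
        print("found:", data.hex())
        break

# For a 4 GB movie the loop runs up to 256**(4 * 2**30) times. And since there
# are only 2**128 possible hashes, astronomically many different files share
# the movie's hash -- the first match is almost surely not the movie.
```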
gofreak said:
This is the best part of this thread. Most impressive. 96KB!!
(I kinda got "stuck" midway through, controls are clunky etc. but..wow)
I wonder if Will Wright hired these guys? I wonder how readable their code is? :lol
:lol :lol :lol :lol
iapetus said:Here are some movies for you to download:
Spiderman 2: 0
Constantine: 1
Battlefield Earth: 1
Star Wars Episode 2: 1
Robots: 0
Enjoy.
border said:What truly amazes me is that a man like Roel Pieper, who is a professor of Computer Science no less, could fall for his story, to the point where he invested a huge amount of capital. If his role in this story is really as reported in the media, his credibility as a computer scientist has been seriously tarnished. In my opinion, the University of Twente, with which Pieper is associated, should at least perform an internal investigation to assess whether Pieper's position can be maintained.
Hitokage said:It'd be easier in a programming sense to just iterate digits of pi until you came across the desired values.
Yup. Though even if you did that, you'd need to store the offset into pi for someone else to get the data back.
aaaaa0 said:Yup. Though even if you did that, you'd need to store the offset into pi for someone else to get the data back.
iapetus said:No you wouldn't. Just store that offset in the same way, thus saving even more space! I'm a genius! Now, time to go get some venture capital funding...
Recursive endless encryption... :lol
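For fun, here's a toy of the pi scheme (my own sketch; it assumes the third-party mpmath library for arbitrary-precision pi). It also shows the catch aaaaa0 and iapetus are joking about:

```python
# Toy "store a file as an offset into pi". Assumes mpmath is installed
# (pip install mpmath); any arbitrary-precision library would do.
from mpmath import mp

mp.dps = 20_000                                   # digits of pi to search
pi_digits = mp.nstr(mp.pi, 20_000).replace(".", "")

target = "999999"                                 # the "file", as a digit string
offset = pi_digits.find(target)
print(offset, len(str(offset)), len(target))
# On average the offset needs about as many digits as the target itself, so
# nothing is saved -- and storing *that* offset the same way saves nothing either.
```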
aaaaa0 said:It won't work. The pigeonhole principle guarantees that I can generate a sequence that won't be covered by any of your procedural algorithms, as long as the number of bits going into your system is smaller than the number of bits I get back out.
http://www.dogma.net/markn/FAQ.html#Q19
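The counting behind that FAQ answer fits in a few lines: for every n, there are more n-bit inputs than there are outputs shorter than n bits.

```python
# Pigeonhole: 2**n distinct n-bit inputs, but only 2**n - 1 outputs shorter
# than n bits, so any scheme that shrinks one input must grow or collide another.
for n in range(1, 9):
    inputs = 2**n
    shorter = sum(2**k for k in range(n))   # 2**0 + ... + 2**(n-1) = 2**n - 1
    print(f"n={n}: {inputs} inputs vs {shorter} shorter outputs")
```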
CrunchyB said:I study CS at the University of Twente. Yeah, Pieper is a bit of a joke here; I've heard people refer to Sloot (Dutch for ditch) & Pieper (Dutch for potato) as "the people who could store a byte in a bit".
Pieper isn't a proper professor, he's just a PR front. He doesn't actually do anything but give the occasional nonsensical visionary lecture. The only reason he's associated with this university is that he used to run Philips, which is a huge company research-wise.
This whole Sloot deal is embarrassing, but no big deal.