Has there... ever actually been a reason for having both public and private fields in the same class?
				
> Has there... ever actually been a reason for having both public and private fields in the same class?

I'm curious, is there a reason for not having them?
> I'm curious, is there a reason for not having them?

The reason is syntactic convenience. Having to mark fields public sucks and I'd rather just declare something "data" and be done with it.
You're implying that both can be useful, but not for the same objects?
> The reason is syntactic convenience. Having to mark fields public sucks and I'd rather just declare something "data" and be done with it.

But also slightly semantic... It would seem odd if both were present in the same class, you know?
If you aren't already, use pytest. Unittest requires too much boilerplate, and pytest has helpful features like parametrize to easily add a lot of tests for things like bad-input testing.

The idea is interesting, and I'm impressed by the efficiency. Though I've hated it because of how difficult it sometimes is to deal with all the fights between Sun, Oracle, IBM, Microsoft and the like. I never liked how the execution chain is a slightly shadowy thing.
But for all the dislike (and more) I have for Java's syntax and philosophy, and the installation/management issues of the JVM, there are languages I like and use that run on the JVM. Like Scala (though I sometimes wish they had kept the .NET compatibility).
I should try it indeed... Thanks for reminding me. Though I should have said that I was talking mostly about IDEs for beginners.
Will definitely look into it.
For all the things I do in Python these days, I'm still not convinced it's a great language for very large projects. The lack of any "static" type checking makes extensive tests a necessity, and it quickly explodes. It's not uncommon to spend more time writing tests than code, but in Python it sometimes feels like a "quadratic" task :/
> If you aren't already, use pytest.

I do... The issue is rather the number of tests you need to write, since you need perfect coverage even more in Python than in other languages...
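As a rough illustration of the parametrize suggestion above (the function under test, parse_age, is made up for the example), one decorator line covers a whole pile of bad inputs:

    import pytest

    def parse_age(text):
        """Hypothetical function under test: parse a non-negative integer age."""
        value = int(text)
        if value < 0:
            raise ValueError("age must be non-negative")
        return value

    @pytest.mark.parametrize("bad_input", ["", "abc", "-1", "1.5", None])
    def test_parse_age_rejects_bad_input(bad_input):
        # One test function, five cases; adding a sixth is a one-token change.
        with pytest.raises((ValueError, TypeError)):
            parse_age(bad_input)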
> Also, write yourself a type checking decorator. [...] I do agree that at a certain size, programming in python just doesn't make that much sense. I know python people are all about duck typing and extensive type checking isn't pythonic but screw that.

Well, that's exactly my problem, and I share your opinion.
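A minimal sketch of such a decorator, assuming you hang the expected types on ordinary annotations (just one way to do it, not necessarily what the poster had in mind):

    import functools
    import inspect

    def typechecked(func):
        """Check annotated arguments against their annotations at call time."""
        sig = inspect.signature(func)

        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            bound = sig.bind(*args, **kwargs)
            for name, value in bound.arguments.items():
                expected = func.__annotations__.get(name)
                if isinstance(expected, type) and not isinstance(value, expected):
                    raise TypeError(f"{name} must be {expected.__name__}, "
                                    f"got {type(value).__name__}")
            return func(*args, **kwargs)
        return wrapper

    @typechecked
    def scale(vector: list, factor: float) -> list:
        return [factor * x for x in vector]

    scale([1.0, 2.0], 2.0)      # fine
    # scale([1.0, 2.0], "2")    # would raise TypeError at the call site instead of deep inside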
> It could be worse. Could be like perl where a comparison operator could change the underlying types of objects you're comparing. Just gross.

Definitely... Still, with custom types, since basically all operators are syntactic sugar for overloadable functions, you can't expect even types you think you know well to behave properly.
>>> x = numpy.uint64(2**55)
>>> x+1 == x
True
What gets evaluated is essentially

	((double) x) + ((double) 1)

i.e. a float64 sum, in which 2**55 + 1 rounds back to 2**55, and not what you actually wanted:

	x + ((uint64) 1)
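For reference, the same thing in actual numpy syntax (this reflects numpy's classic promotion rules, current at the time; newer releases changed how uint64 mixes with Python ints):

    import numpy

    x = numpy.uint64(2**55)

    # uint64 mixed with a Python int promotes to float64, whose 53-bit
    # mantissa cannot represent 2**55 + 1, so the 1 is silently lost:
    print(x + 1 == x)                   # True under those promotion rules

    # Keeping both operands uint64 stays in integer arithmetic:
    print(x + numpy.uint64(1) == x)     # False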
Another one where the documentation and the actual behaviour disagree:
>>> help(numpy.ndarray.T)
Help on getset descriptor numpy.ndarray.T:
T
    Same as self.transpose(), except that self is returned if
    self.ndim < 2.
    [...]
>>> A = numpy.array([0.])
>>> A.ndim
1
>>> A.T is A
False
>>> id(A)
41728192
>>> id(A.T)
45959568
	So if it's "just data", is it accessible to external users of the class or not?
class Test(val field1: String, var field2: String)   // val: public read-only, var: public read-write

val test = new Test("a", "b")
val a = test.field1   // constructor parameters are accessible as fields from outside
	Isn't that the whole idea of a data class?
> is there a linux/unix thread? I'm attempting to run google music manager from a headless server (allowing music files downloaded to a seedbox to be automatically uploaded to google play music, essentially) but I am coming up short and I'm not sure where to ask for help

http://neogaf.com/showthread.php?t=1425296
> Functional programmers agree with this, functions on data, no private state. In OO land it's because you're trying to abstract something away. Computed properties for example, maybe you don't expose "Radius" because it's an implementation detail for a circle but you do expose "Area".

Disagree. ML languages love to hide implementations. There is nothing unique about functional languages that allows them to forget about invariants.
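The computed-property example from the quoted post, as a quick sketch (Python-flavoured only because that's the other language in this thread; the idea itself is language-agnostic):

    import math

    class Circle:
        def __init__(self, radius):
            self._radius = radius       # implementation detail, "private" by convention

        @property
        def area(self):                 # the computed property the class actually exposes
            return math.pi * self._radius ** 2

    c = Circle(2.0)
    print(c.area)    # ~12.566; callers never touch _radius directly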
> So if it's "just data", is it accessible to external users of the class or not?

You get both.
Have you guys seen this? https://www.devrant.io/feed/
#include <random>
using namespace std;   // for mt19937 and the distribution templates below
class UniformRNG {
public:
    UniformRNG (const int seed, const int N) { ... } /* Init generator with seed, set distributions */
    const int gen_lattice_site() { return uniform_int(engine); }
    const float gen_real() { return uniform_real(engine); }
private:
    mt19937 engine; // Mersenne-Twister engine
    uniform_int_distribution<int> uniform_int;
    uniform_real_distribution<float> uniform_real;
};
[...]
// Init the generator
UniformRNG rng(seed, N);
[...]
// Use in function
int i = rng.gen_lattice_site();
int j = rng.gen_lattice_site();
	init_genrand(seed);
[...]
// Use in function
auto i = (int) (gen_real2() * L); // Or static_cast<int>(...)
auto j = (int) (gen_real2() * L);
	// in UniformRNG
const int gen_lattice_site() { return (int) (uniform_real(engine) * L); }
> Are you compiling with full optimizations? What compiler? And what does your benchmark program look like?

This is with g++ (GCC) 7.1.1 and "-O3 -std=c++14".
My benchmark is simply using "time" in my shell. I realize that's very blunt and measures everything, not just the number generation. I was just curious when rewriting the code for a lab, and I use this as a general measure to make sure I don't slow the program down a lot. The total time variance is +/- 0.01s over multiple runs.

> My work is sending me to the VS Live event in Orlando this year. Does anyone here have any experience with these conventions? Anything specific I should be looking out for, or tips to make the most of it?

My work sent me and another employee to Microsoft Build this year, but I don't really have any super awesome tips for you unfortunately. XD MS Build was fun, but a lot of the panels weren't directly related to my day-to-day work; that said, I tried to attend panels about stuff I've heard of but haven't really worked with yet. Also, lots of Azure panels since we're planning on migrating soon.
I'm not sure if this is the place to be asking questions (apologies if not!), but here's my stackoverflow post:
https://stackoverflow.com/questions/45993330/c-using-scanf-on-strings
I've only just started using C, so forgive the stupid question!

It's not a stupid question; buffer management and size querying are the source of endless nastiness in C APIs.
If anyone here has used the GAF live thread extension (where it auto-loads newer pages without needing to refresh): I'd like to develop something like that for another forum I frequent. How would I go about doing this, and where would I start?
> Any reason? By fear of mixing the ideas of the two languages?

I'm not afraid that students would mix up the two languages; I think it's just mostly a waste of time to use two different languages or environments at first. My philosophy is that core programming skill and intuition is learned through building and managing more and more complex systems, and anything else is initially a distraction that should be minimized: language syntax, libraries, tools etc. Getting decently fluent in one environment is a necessary investment to get over those distractions. Moving to another environment resets the situation and pulls the focus off the stuff that matters.
...
Still, I have a lot of students who learn Python and Caml at the same time, several of them beginners. I won't say there's never an oddity (how many times have I seen a Python function made needlessly complex just to be tail recursive... when Python doesn't have tail-call elimination?), but it seems that they do fine.
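For what it's worth, a quick way to see that CPython really doesn't eliminate tail calls (the exact recursion limit varies, 1000 by default):

    import sys

    def countdown(n):
        if n == 0:
            return 0
        return countdown(n - 1)    # written as a tail call, but each call still pushes a frame

    print(sys.getrecursionlimit())  # typically 1000
    try:
        countdown(100000)
    except RecursionError:
        print("no tail-call elimination: the stack still overflows")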
> If anyone here has used the GAF live thread extension (where it auto-loads newer pages without needing to refresh): I'd like to develop something like that for another forum I frequent. How would I go about doing this, and where would I start?
I don't really know anything about browser extensions, but my understanding is that for something like that to work in the background you need to use polling: constantly sending GET requests to the page at a set interval (every 5 seconds, 1 second, 2 minutes, etc.). Then you could use AJAX to manipulate the DOM in real time without needing to refresh the page.
> This is with g++ (GCC) 7.1.1 and "-O3 -std=c++14". My benchmark is simply using "time" in my shell. [...]
randint():
        sub     rsp, 5016
        xor     ecx, ecx
        mov     edx, 1
        mov     QWORD PTR [rsp], 0
.L137:
        mov     rax, rcx
        shr     rax, 30
        xor     rax, rcx
        imul    rax, rax, 1812433253
        lea     ecx, [rax+rdx]
        mov     QWORD PTR [rsp+rdx*8], rcx
        add     rdx, 1
        cmp     rdx, 624
        jne     .L137
        mov     QWORD PTR [rsp+4992], 624
.L138:
        mov     rdi, rsp
        call    std::mersenne_twister_engine<unsigned long, 32ul, 624ul, 397ul, 31ul, 2567483615ul, 11ul, 4294967295ul, 7ul, 2636928640ul, 15ul, 4022730752ul, 18ul, 1812433253ul>::operator()()
        cmp     rax, 2147483647
        ja      .L138
        add     rsp, 5016
        ret
	randuint():
        sub     rsp, 5016
        xor     ecx, ecx
        mov     edx, 1
        mov     QWORD PTR [rsp], 0
.L35:
        mov     rax, rcx
        shr     rax, 30
        xor     rax, rcx
        imul    rax, rax, 1812433253
        lea     ecx, [rax+rdx]
        mov     QWORD PTR [rsp+rdx*8], rcx
        add     rdx, 1
        cmp     rdx, 624
        jne     .L35
        mov     rdi, rsp
        mov     QWORD PTR [rsp+4992], 624
        call    std::mersenne_twister_engine<unsigned long, 32ul, 624ul, 397ul, 31ul, 2567483615ul, 11ul, 4294967295ul, 7ul, 2636928640ul, 15ul, 4022730752ul, 18ul, 1812433253ul>::operator()()
        add     rsp, 5016
        ret
	Doesn't he compare std::random to the original version of Mersenne Twister?
I wonder what the two algorithms you decompiled are, but both are related to std::random, no?
#include <random>
#include <stdio.h>
using namespace std;
mt19937 engine;
int test1() {
    int total;
  
    uniform_int_distribution<uint32_t> uniform_int;
    
    for(int i=0; i<50000000; ++i)
        total += uniform_int(engine);
    
    return total;
}
int test2() {
    int total;
  
    uniform_int_distribution<int32_t> uniform_int;
    
    for(int i=0; i<50000000; ++i)
        total += uniform_int(engine);
    
    return total;
}
int main(int argc, char* argv[]) {
    int s = 0;
  
    s += test1();
    s += test2();
    return s;
}
	                0.23    0.00 50000000/150006470     test1() [3]
                0.46    0.00 100006470/150006470     test2() [1]
[2]     37.4    0.70    0.00 150006470         std::mersenne_twister_engine<unsigned long, 32ul, 624ul, 397ul, 31ul, 2567483615ul, 11ul, 4294967295ul, 7ul, 2636928640ul, 15ul, 4022730752ul, 18ul, 1812433253ul>::operator()() [clone .constprop.7] [2]
	test1():
        push    rbp
        push    rbx
        mov     ebx, 50000000
        sub     rsp, 8
.L35:
        mov     edi, OFFSET FLAT:engine
        call    std::mersenne_twister_engine<unsigned long, 32ul, 624ul, 397ul, 31ul, 2567483615ul, 11ul, 4294967295ul, 7ul, 2636928640ul, 15ul, 4022730752ul, 18ul, 1812433253ul>::operator()()
        add     ebp, eax
        sub     ebx, 1
        jne     .L35
        add     rsp, 8
        mov     eax, ebp
        pop     rbx
        pop     rbp
        ret
	test2():
        push    r15
        push    r14
        movabs  rax, 9223372032559808512
        push    r13
        push    r12
        xor     edx, edx
        push    rbp
        push    rbx
        mov     ebx, 2147483647
        sub     rbx, rdx
        mov     r14d, 50000000
        sub     rsp, 104
        mov     QWORD PTR [rsp+80], rax
        mov     eax, 4294967294
        cmp     rbx, rax
        ja      .L139
.L208:
        add     rbx, 1
        add     rax, 1
        xor     edx, edx
        div     rbx
        imul    rbx, rax
        mov     rbp, rax
.L140:
        mov     edi, OFFSET FLAT:engine
        call    std::mersenne_twister_engine<unsigned long, 32ul, 624ul, 397ul, 31ul, 2567483615ul, 11ul, 4294967295ul, 7ul, 2636928640ul, 15ul, 4022730752ul, 18ul, 1812433253ul>::operator()()
        cmp     rbx, rax
        jbe     .L140
        xor     edx, edx
        div     rbp
.L141:
        movsx   rdx, DWORD PTR [rsp+80]
        add     eax, edx
        add     r15d, eax
        sub     r14d, 1
        je      .L170
.L210:
        movsx   rbx, DWORD PTR [rsp+84]
        mov     eax, 4294967294
        sub     rbx, rdx
        cmp     rbx, rax
        jbe     .L208
.L139:
        mov     eax, 4294967295
        cmp     rbx, rax
        jne     .L209
        mov     edi, OFFSET FLAT:engine
        call    std::mersenne_twister_engine<unsigned long, 32ul, 624ul, 397ul, 31ul, 2567483615ul, 11ul, 4294967295ul, 7ul, 2636928640ul, 15ul, 4022730752ul, 18ul, 1812433253ul>::operator()()
        movsx   rdx, DWORD PTR [rsp+80]
        add     eax, edx
        add     r15d, eax
        sub     r14d, 1
        jne     .L210
.L170:
        add     rsp, 104
        mov     eax, r15d
        pop     rbx
        pop     rbp
        pop     r12
        pop     r13
        pop     r14
        pop     r15
        ret
	test1():                              # @test1()
        push    rbp
        push    r14
        push    rbx
        sub     rsp, 16
        movabs  rax, -4294967296
        mov     qword ptr [rsp + 8], rax
        mov     ebx, 50000000
        lea     r14, [rsp + 8]
.LBB0_1:                                # =>This Inner Loop Header: Depth=1
        mov     esi, engine
        mov     rdi, r14
        mov     rdx, r14
        call    unsigned int std::uniform_int_distribution<unsigned int>::operator()<std::mersenne_twister_engine<unsigned long, 32ul, 624ul, 397ul, 31ul, 2567483615ul, 11ul, 4294967295ul, 7ul, 2636928640ul, 15ul, 4022730752ul, 18ul, 1812433253ul> >(std::mersenne_twister_engine<unsigned long, 32ul, 624ul, 397ul, 31ul, 2567483615ul, 11ul, 4294967295ul, 7ul, 2636928640ul, 15ul, 4022730752ul, 18ul, 1812433253ul>&, std::uniform_int_distribution<unsigned int>::param_type const&)
        add     ebp, eax
        dec     ebx
        jne     .LBB0_1
        mov     eax, ebp
        add     rsp, 16
        pop     rbx
        pop     r14
        pop     rbp
        ret
	test2():                              # @test2()
        push    rbp
        push    r14
        push    rbx
        sub     rsp, 16
        movabs  rax, 9223372032559808512
        mov     qword ptr [rsp + 8], rax
        mov     ebx, 50000000
        lea     r14, [rsp + 8]
.LBB1_1:                                # =>This Inner Loop Header: Depth=1
        mov     esi, engine
        mov     rdi, r14
        mov     rdx, r14
        call    int std::uniform_int_distribution<int>::operator()<std::mersenne_twister_engine<unsigned long, 32ul, 624ul, 397ul, 31ul, 2567483615ul, 11ul, 4294967295ul, 7ul, 2636928640ul, 15ul, 4022730752ul, 18ul, 1812433253ul> >(std::mersenne_twister_engine<unsigned long, 32ul, 624ul, 397ul, 31ul, 2567483615ul, 11ul, 4294967295ul, 7ul, 2636928640ul, 15ul, 4022730752ul, 18ul, 1812433253ul>&, std::uniform_int_distribution<int>::param_type const&)
        add     ebp, eax
        dec     ebx
        jne     .LBB1_1
        mov     eax, ebp
        add     rsp, 16
        pop     rbx
        pop     r14
        pop     rbp
        ret
> A great website is The Compiler Explorer. It's honestly a bit mind-blowing.

Bookmarked, it's just great... Many thanks...
.L138:
        mov     rdi, rsp
        call    std::mersenne_twister_engine<unsigned long, 32ul, 624ul, 397ul, 31ul, 2567483615ul, 11ul, 4294967295ul, 7ul, 2636928640ul, 15ul, 4022730752ul, 18ul, 1812433253ul>::operator()()
        cmp     rax, 2147483647
        ja      .L138
> Wow, so for some reason GCC is just terrible at this.

That's apparently the case, but I wonder why...
uniform_int_distribution<unsigned> uniform;
int k = reinterpret_cast<int>(uniform(engine));
	template<class _IntType>
template<class _URNG>
typename uniform_int_distribution<_IntType>::result_type
uniform_int_distribution<_IntType>::operator()(_URNG& __g, const param_type& __p)
{
    // Always generate uint32 or uint64 as the underlying type, regardless of whether or
    // not the result type is signed.
    typedef typename conditional<sizeof(result_type) <= sizeof(uint32_t),
                                            uint32_t, uint64_t>::type _UIntType;
    // The width of the range that we're interested in is [max - min + 1], as an unsigned.
    const _UIntType _Rp = __p.b() - __p.a() + _UIntType(1);
    // If the range is only one number wide, just return it, there's no randomness.
    if (_Rp == 1)
        return __p.a();
    // What's the max number of digits for this type?
    const size_t _Dt = numeric_limits<_UIntType>::digits;
    // Get the thing that generates random bits for this unsigned type.
    typedef __independent_bits_engine<_URNG, _UIntType> _Eng;
    if (_Rp == 0)
        return static_cast<result_type>(_Eng(__g, _Dt)());
    size_t __w = _Dt - __clz(_Rp) - 1;
    if ((_Rp & (std::numeric_limits<_UIntType>::max() >> (_Dt - __w))) != 0)
        ++__w;
    _Eng __e(__g, __w);
    // Keep trying to get a random number until it's less than the width we're interested in.
    _UIntType __u;
    do
    {
        __u = __e();
    } while (__u >= _Rp);
    return static_cast<result_type>(__u + __p.a());
}
	    class param_type
    {
        result_type __a_;
        result_type __b_;
    public:
        typedef uniform_int_distribution distribution_type;
        // Oh, by default it's [0, max]
        explicit param_type(result_type __a = 0,
                            result_type __b = numeric_limits<result_type>::max())
            : __a_(__a), __b_(__b) {}
        result_type a() const {return __a_;}
        result_type b() const {return __b_;}
        friend bool operator==(const param_type& __x, const param_type& __y)
            {return __x.__a_ == __y.__a_ && __x.__b_ == __y.__b_;}
        friend bool operator!=(const param_type& __x, const param_type& __y)
            {return !(__x == __y);}
    };
    int64_t total = 0;
  
    uniform_int_distribution<int32_t> uniform_int(std::numeric_limits<int32_t>::min(), std::numeric_limits<int32_t>::max());
    
    for(int i=0; i<50000000; ++i)
        total += uniform_int(engine);
    
    return total;
I still don't understand how the signed version of the function can return a negative value if it calls the engine again whenever the value is above 0x7FFFFFFF (compared as unsigned, so basically a < 0 test, no?)
??
And why do it to begin with?
Going

    uniform_int_distribution<unsigned> uniform;
    int k = reinterpret_cast<int>(uniform(engine));

gives you a 2x increase in speed?!
> This won't compile. You can't reinterpret_cast from unsigned to signed

Oops, sorry, I'm tired. Make it static_cast (I had something else in mind at first, using a reinterpret_cast<int &>, and I mixed them up).
> So yea, I guess this makes sense.

Somehow... Not the kind of thing you would expect at first, though. And there's still something I don't understand in the assembly with the ja jump. I'll try again when I'm less tired.
> Probably you can get the same speed increase by just including negative numbers in the range.

Just checked, it does...
> I think several languages / environments is especially bad for the least gifted students. They get overwhelmed and bogged down in details.

I agree... I should have said that the students here who learn Python and ML almost at the same time are among the best students; they're not the norm.
> Learning more languages, at least those with different paradigms, is obviously useful or even necessary for later growth. But it's much more efficient to do it later

Most probably. Though I wonder what "later" is (probably heavily depends on the people involved and on the time they spend on learning).
> static_cast is undefined behavior. Signed overflow

(smacking the head on the desk noise)
I love how I did most lower-level stuff using simple casts (like (int)) in the past, which worked perfectly well. Then, since I've been doing C++11, I'm doing less and less of those dirty things.
Make it (int)(x) then.
Yes, that's not nice; reinterpret_cast was indeed the right solution, but with the need to go through memory.

> The ja is to try again if it's not in [0, int_max)

Yes, but how do you get negative random numbers? That's what I don't understand.
But seeing how I'm dumb tonight, that must be something obvious I missed.
> Well if you allow them in your range, then that ja condition will be different. The algorithm generates an *offset* to be used from the start of the range, so if your minimum value is negative, and the offset is smaller than the absolute value of the min, you'll get a negative number

The thing is, I don't see where they add the offset, and if you want a number in the [-2^31 .. 2^31-1] range, you should create one in the [0 .. 2^32-1] range, no?
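The quoted explanation in concrete numbers (plain Python just for the arithmetic; this mirrors the __u + __p.a() line in the library source quoted above):

    a, b = -100, 100           # distribution bounds
    width = b - a + 1          # 201 possible values
    u = 37                     # raw offset drawn from the engine, in [0, width)
    print(a + u)               # -63: minimum plus offset, negative whenever u < abs(a)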