Both help compose a larger group of experts in the field. The thing is, if all you're going by are exclusively the thoughts of computer scientists (which are damn important, mind you) then you're missing a bigger picture of ideas. And Max Tegmark is a physicist on paper but an expert on this subject, and he even recently published a book on the topic. I'd say that's knowing something.
This is all semantics to some degree, because the larger issue is this: if you're basically asking somebody to give you solid proof that we will 100% get something in the future, that's impossible. Nobody can tell you for sure what will happen in any field of research, because it's all theory until it's reality. That includes things like the effects of global warming. But why not at least pay attention and prepare?
The problem with stuff like AI is we might not have time to deal with it once it's on our doorstep. It will be too late to kickstart a global initiative for AI ethics, security, and transparent research. It's already blowing past the station at that point and taking us all with it.
I'm not just talking about the computer scientists, though. I'd be perfectly fine with Bostrom and Tegmark if they were actually conducting empirical research into the topic rather than running workshops, advocating and writing books. Oh, and about the book part: I don't think publishing a book is a sign of expertise so much as being downright suspicious. Real scientists publish papers, not books, because papers are peer reviewed and required to be worth a damn. Any random individual can publish a book provided people are willing to buy it. Guess who's a best-selling author? Deepak Chopra. Guess what Chopra isn't? A published, cited or respected expert in a scientific field.
Books are fine for publishing pop-sci explanations for laypeople. Stephen Hawking's books are great. But they should be limited to explaining established consensus, not presenting controversial conclusions. The problem is that Bostrom and Tegmark publish books (also blogs, magazine articles, videos...) doing exactly that, which raises all kinds of red flags.
Also: when climate scientists present theoretical work on the impact of climate change, they have tons of real-world, empirical data to back those theories up. And even then, they're perfectly fine with stating that they can only really say that things will change in a lot of places, but that it's difficult to say in what way. That's not what AI doomsayers are doing; they're doing the opposite, in fact - they make definitive, sweeping predictions with basically nothing to back them up. Even the singularity you're arguing is the core threat of AI - and the supposed reason we can't wait to understand the issue before we act to prevent it (however that works) - is completely speculative.