As a first book, I think Children of Time is much better than Shards of Earth. I enjoyed both series but would say the third book in each was the weakest. The Final Architecture series had a slightly stronger third entry.
And the article content posted is just an excerpt. The rest of the article focuses on how AI can improve the efficiency of workers, not replace them.
Ideally, you’ve got a learned individual using AI to process data more efficiently, but one who is smart enough to toss out the crap and to review the output with a critical eye. I suspect the reality is that most individuals using AI will just pass its output along uncritically.
I’m less worried about employees scared of AI and more worried about employees and employers embracing AI without any skepticism.
Thanks, very interesting. I’m not sure I see such a stark pre-/post-9/11 contrast. However, the idea that the US public’s approach to the post-9/11 conflicts would have an influence makes sense and isn’t something I’d ever have considered on my own.
Me too, but I’d put Usenet in there before Slashdot.
Spock, Uhura, Chapel, heck, even M’Benga don’t make it a prequel, but a Lieutenant Kirk does?
Because most people aren’t technical enough to understand there are alternatives, particularly if those alternatives involve removing a scary label telling you not to.
The South. Just below Indiana, the middle finger of the South. And I say this as a Hoosier for much of my life.
As a guy responsible for a 1,000-employee O365 tenant, I’ve been watching this with concern.
I don’t think I’m a target of state actors. I also don’t have any E5 licenses.
I’m disturbed by the opaqueness of MS’ response. From what they have explained, it sounds like the bad actors could self-sign a valid token to access cloud resources. That’s obviously a huge concern. It also sounds like the bad actors only accessed Exchange Online resources, while my understanding is that a valid token would have let them do more. I feel like the fact that they didn’t means something isn’t yet public.
I’m very disturbed by the fact that it sounds like I’d have no way to know this sort of breach was even occurring.
Compared to decades ago, I have a generally positive view of MS and security. It bothers me that this breach was a month old before the US government notified MS of it. It also bothers me that MS hasn’t been terribly forthcoming about what happened. And there’s likely no need to mention that I’m bothered by being so deep into the O365 environment that I can’t pull out.
Does the GPL require giving redistribution rights to the exact source code used to produce a particular build of a product?
It does, very explicitly and intentionally. What it doesn’t say is that you have to make that source code available publicly, just that you have to make it available to those you give or sell the binary to.
What Red Hat is doing is saying: you have the full right to the code, and you have the right to redistribute the code. However, if you exercise that right, they’ll pull your license to their binaries and you lose access to code fixes.
That’s probably legal under the GPL, though people smarter than me are arguing it isn’t. However, if those writing GPLv2 had thought of this type of attack at the time, I suspect they’d have written the license so it wasn’t.
I believe you are correct. Any paying Red Hat customer consuming GPL code has the right to redistribute that code. What Red Hat seems to be suggesting is that if you exercise that right, they’ll cut you as a customer, and thus you no longer have access to bug fixes going forward.
I suspect it’s legal under the GPL. I’m certain it violates the spirit of the GPL.
I am not a lawyer, but I have been a follower of FLOSS projects for a long time.
Me too. I know what I’m suggesting is functionally impossible. I’m wondering if it could be done in compliance with the GPL.
All of those contributors have done so under language that says GPLv2 or higher; it specifically says you can modify or redistribute under GPLv2 or later versions. So nothing stops the Linux Foundation from asking new contributors to contribute under a GPLv4 and then releasing the combined work of the new kernel under GPLv4.
The old code would still be available under GPLv2, but I suspect subsequent releases could be released under a later version and still comply with the terms of the original contributions.
Again, I know it won’t happen, just like I believe Red Hat’s behavior is within the rules of the GPL. I’d love to hear arguments as to how Red Hat is violating the GPL or reasons why the kernel couldn’t be released under GPLv3 or higher.
I suspect what Red Hat is doing is compatible with GPLv2, which is how the Linux kernel is licensed. I’m certain what they are doing is inimical to the intent of GPLv2.
That raises some questions and possibilities. It looks like the Linux kernel still has the GPLv2 or later clause, despite not moving to GPLv3. See https://www.kernel.org/doc/html/v4.18/process/license-rules.html
How possible is it to create a GPLv4 that addresses this? Writing a new license that does so shouldn’t be difficult. However, I’d assume the Linux kernel isn’t released under GPLv3 or later because of objections to that version’s changes. I’d imagine a GPLv4 that addresses the Red Hat issue but leaves out the GPLv3 changes is likely a non-starter, because those who have chosen a GPLv3-or-later license will object.
Given the thousands of contributors to the Linux kernel, is an upgrade to a GPL version higher than v2 even possible? I’ve got no idea, but I’m curious to hear any insights.
Perfect! Thanks.
My concern is less the VM hosting the docker instance getting compromised and more that Lemmy has an exploit and the Lemmy instance gets compromised. I’m quite certain Lemmy is getting a closer look from the bad guys. Hundreds of instances have been spun up in a week, most of which have done nothing more than follow an online example of how to spin up a Lemmy instance.
And I was under the impression that the container, and thus the logs, were cleared when restarting or redeploying docker. If I’m wrong, I’m horribly embarrassed and will point at the “old school” in the title. I’ll also be doing some testing.
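For anyone else curious, here’s roughly how I’d test the restart vs. redeploy distinction (a minimal sketch with a throwaway container; “logtest” is just a made-up name, and this assumes the default json-file logging driver):

```sh
# Logs live and die with the container under the default json-file driver.
docker run -d --name logtest alpine sh -c 'echo first-run; sleep 3600'
docker logs logtest    # shows "first-run"

docker restart logtest
docker logs logtest    # "first-run" is still there: a restart keeps the container

docker rm -f logtest   # a redeploy (rm + recreate) destroys the container...
docker run -d --name logtest alpine sh -c 'sleep 3600'
docker logs logtest    # ...and its logs went with it
```

If the logs need to survive a redeploy, the usual fix is shipping them off the container, e.g. the syslog or journald logging drivers, or a mounted log volume.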
Kids these days with their containers and their pipelines and their devops. Back in my day…
Don’t get me started about the internal devs at work. You’ve already got me triggered.
And, I can just imagine the posts they’re making about how the internal IT slows them down and causes issues with the development cycle.
Nice. I’ll definitely check it out.
I’m intrigued by the phrase “crowdsec security engine on the docker”. Yes, I can Google, but I’d appreciate a bit of comment on what that is and how involved the setup is.
Agreed on all counts. Of course none of that exists on the Lemmy docker instance.
The person isn’t talking about automation being difficult for a hosted website. They’re talking about a third-party system that doesn’t give you an easy way to automate, just a web GUI for uploading a cert. For example, our WAP interface and our on-premises ERP don’t offer a way to automate. Sure, we could probably write code to automate it and run the risk it breaks after a vendor update. It’s easier to pay for a 12-month cert and do it manually.
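When a box does expose even a crude upload endpoint, that “code to automate it” ends up looking something like this (a sketch only; the hook script, credentials, and upload URL are all hypothetical, and they’re exactly the bits a vendor update can break):

```sh
#!/bin/sh
# Hypothetical deploy hook, wired up as:
#   certbot renew --deploy-hook /usr/local/bin/push-cert-to-erp.sh
# certbot exports RENEWED_LINEAGE, the directory holding the freshly
# renewed cert. The curl call stands in for whatever upload endpoint the
# vendor happens to expose this release; that's the part that breaks.
curl -sk -u "$ERP_USER:$ERP_PASS" \
  -F "cert=@$RENEWED_LINEAGE/fullchain.pem" \
  -F "key=@$RENEWED_LINEAGE/privkey.pem" \
  "https://erp.internal.example/admin/upload-cert"
```

With no endpoint at all, there’s nothing to hook, which is why the manual 12-month cert wins.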