Oh no, educated workers who don’t want to be taken advantage of and who know their worth. Maybe companies should value their employees if they want company loyalty.
And OpenAI is not personal use?
The trajectory was chosen by NASA because the Orion capsule on top of the SLS rocket does not have enough performance to enter and leave a regular low lunar orbit while landing and bringing back astronauts. This trajectory has nothing to do with SpaceX.
Nor did I say it did. I said some brain-dead idiots sent the contract off to a company who designed a craft incapable of doing what we have done previously. Congrats, Lockheed, for fucking up our next moon program. It’s you who equated that to SpaceX lmaoo
Comparing the one rocket per moon landing to the 15 launches (thank you for writing “launches” and not “rockets,” as Destin Sandlin wrongly did) misses the point: the mass delivered to the surface is gigantic compared to Apollo. Why? Because we do not want to say “we did it!” We want to say “we live there!”
I mean, it really doesn’t matter. Are you going to have astronauts just chilling in orbit for like a year waiting for those launches, racking up radiation? Saying the reason we need 15 launches for Starship is specifically due to mass is such a cop-out. It’s due to how limited the amount of fuel we can send up to refuel in orbit is, and it’s fucking stupid at our current level of space infrastructure. We still haven’t even tested it; what, we need another 4 decades for this terrible plan to come to fruition? Take note of what the Apollo engineers said about stepping stones in development: take too big a leap and you will not be able to adequately evaluate what went wrong if something does; take too small a step and you will never reach the goal. We decided to take such massive leaps with no forethought about their efficiency.
Can people stop saying SpaceX rockets explode? They do not.
No, that is precisely what occurred with Starship. You can see the shockwave from the explosion, which means the oxidizer mixed with the propellant before exploding during the flip phase; that’s a major fucking failure. It was not a rupture like previous issues, nor was the flight terminated, it fucking exploded lmao. The worst part: all that lovely telemetry that’s supposedly gonna help them out gave zero indication of said catastrophic failure, so that’s gonna be such great info for them, right? Just like the first test that failed when they knew the pad wouldn’t be strong enough and caused damage to the rocket, meaning they got no actionable data?
As of now, and evolving for Starship:
$7B cost, $4B from NASA for the first 2 missions
11 years for the first tests, still no rocket
Can bring 220,000 lb and 35,000 ft³ to the moon
And they still end up with a rocket NASA can continue to use at a very low price (less than 25% of SLS’s cost per mission)
Starship is not a proven concept and is still actively in development; these numbers mean nothing right now. Massive issues are looming and 90% of what’s needed hasn’t even been tested yet, but go ahead, keep riding daddy Musk as if he isn’t killing good ideas with lofty moving goalposts and a complete lack of understanding of what’s being developed.
Your description is how pre-LLM chatbots work.
Not really, we just parallelized the computation and used other models to filter our training data and tokenize it. Sure, the loop looks more complex because of parallelization and because the words used as inputs and selections are tokenized, but that doesn’t change the underlying principles here.
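A toy sketch of that tokenizing step, just to illustrate turning words into the integer ids the model actually computes over (the vocabulary and ids here are made up for illustration):

```python
# Toy tokenizer: map words to integer ids the model computes over.
# Real tokenizers use learned subword vocabularies; this is a stand-in.
VOCAB = {"<unk>": 0, "the": 1, "cat": 2, "sat": 3, "on": 4, "mat": 5}

def tokenize(text):
    """Lowercase, split on whitespace, and look up each word's id."""
    return [VOCAB.get(word, VOCAB["<unk>"]) for word in text.lower().split()]

tokenize("The cat sat on the mat")  # → [1, 2, 3, 4, 1, 5]
```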
Emergent properties don’t require feedback. They just need components of the system to interact to produce properties that the individual components don’t have.
Yes, they need proper interaction, or, you know, feedback, for this to occur. Glad we covered that. Having more items but gating their interaction is not adding more components to the system; it’s creating a new system to follow the old, which in this case is still just more probability calculations. Sorry, but chaining probability calculations is not gonna somehow make something sentient or aware. For that to happen it needs to be able to influence its internal weighting or training data without external aid. Hint: these models are deterministic, meaning there is zero feedback or interaction to create emergent properties in this system.
Emergent properties are literally the only reason LLMs work at all.
No, LLMs work because we massively increased the size and throughput of our probability calculations, allowing increased precision in the predictions, which means they look more intelligible. That’s it. Garbage in, garbage out still applies, and making the model larger does not mean that this garbage is gonna magically create new control loops in your code. It might increase precision, as you have more options to compare and weight against, but it does not change the underlying system.
I’m just gonna leave this here, since you want to buy into all the bullshit surrounding Starship lmao
No, the queue will now add popular playlists to what you were listening to when you restart the app, if your previous queue was a generated one. Not sure of the exact steps to cause it, but it seems like if you were listening to a daily playlist and close the app, the next day the playlist has updated, and instead of pointing to the new daily it decides to point to one of the popular playlists for the next songs in your queue. It doesn’t stop the song you paused on; it just adds new shit to the queue after it once it loses track of where to point. Seems like they should just start shuffling your liked songs in that case, but nope, it points to a random pop playlist.
And I’d like to see that contract hold up in court lol
You have no idea what you are talking about. When they train a model they have two data sets: one that fine-tunes it and another that evaluates it. You never have the training data in the evaluation set or vice versa.
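A minimal sketch of that split, assuming a simple list of examples (the function name and fraction are mine, just to show the disjointness):

```python
import random

def train_eval_split(examples, eval_fraction=0.2, seed=0):
    """Shuffle, then cut so no example lands in both sets."""
    rng = random.Random(seed)
    shuffled = list(examples)
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * (1 - eval_fraction))
    return shuffled[:cut], shuffled[cut:]

train, evaluation = train_eval_split([f"example_{i}" for i in range(10)])
assert not set(train) & set(evaluation)  # disjoint: no leakage either way
```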
That’s not what I said at all. I said, as the paper stated, that the model encodes truth values into its internal weights during training; this was then shown to be more effective when training sets with a more equal distribution of true and false data points were used. If they used one-sided training data, the effect was significantly biased. That’s all the paper is describing.
If you give it 10 statements, 5 of which are true and 5 of which are false, and ask it to correctly label each statement, and it does so, and then you negate each statement and it correctly labels the negated truth values, there’s more going on than simply “producing words.”
It’s not that more is going on; it’s that it had such a large training set that these true vs. false statements are likely covered somewhere in its set, and the probabilities say it should assign true or false to the statement.
And then, look at that, your next paragraph states exactly that: the models trained on true/false datasets performed extremely well at labeling true or false. It’s saying the model is encoding, or setting weights for, the true and false values when that’s the majority of its data set. That’s basically it; you are reading too much into the paper.
AI has been a thing for decades. It means artificial intelligence; it does not mean a large language model. A specially designed system that operates based on predefined choices or operations is still AI, even if it’s not a neural network and looks like classical programming. The computer enemies in games are AI: they mimic an intelligent player artificially. The computer opponent in Pong is also AI.
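For instance, the entire “AI” of a Pong opponent can be a couple of if-statements chasing the ball; a rough sketch (the names and speed cap are made up):

```python
def pong_ai_move(paddle_y, ball_y, speed=4):
    """Rule-based opponent: chase the ball's vertical position,
    capped at the paddle's max speed. No learning involved."""
    if ball_y > paddle_y:
        return min(speed, ball_y - paddle_y)   # move down toward the ball
    if ball_y < paddle_y:
        return max(-speed, ball_y - paddle_y)  # move up toward the ball
    return 0                                   # aligned with the ball: stay put
```

It mimics an intelligent player artificially, and that’s all the definition requires.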
Now, if we want to talk about how stupid it is to use a predictive algorithm to run your markets when it really only knows about previous events and can never truly extrapolate new data points and trends into actionable trades, then we could be here for hours. Just know it’s not an LLM; there are different categories of AI, of which an LLM is its own category.
Do you understand how they work or not? First, take all human text online. Next, rank how likely those words are to come after one another. Last, write a loop that picks the next most probable word until the end-of-line character is judged most probable. There you go, that’s essentially the loop of an LLM. There are design elements that make creating the training data quicker, or the model quicker at picking the next word, but at its core this is all it does.
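That loop can be sketched in a few lines; here the “model” is just a toy table of hand-made next-word probabilities standing in for the ranked likelihoods (all words and values are hypothetical):

```python
import random

# Toy "model": next-word probabilities standing in for likelihoods
# ranked from training text. <end> is the stop token.
NEXT_WORD = {
    "<start>": {"the": 0.6, "a": 0.4},
    "the": {"cat": 0.5, "dog": 0.5},
    "a": {"cat": 0.5, "dog": 0.5},
    "cat": {"sat": 0.7, "<end>": 0.3},
    "dog": {"sat": 0.7, "<end>": 0.3},
    "sat": {"<end>": 1.0},
}

def generate(seed=0):
    """The loop: sample the next word from the table until <end> is drawn."""
    rng = random.Random(seed)
    word, out = "<start>", []
    while word != "<end>":
        word = rng.choices(list(NEXT_WORD[word]),
                           weights=list(NEXT_WORD[word].values()))[0]
        if word != "<end>":
            out.append(word)
    return " ".join(out)
```

A real LLM replaces the lookup table with a neural network scoring every token in its vocabulary, but the outer loop is the same.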
It makes sense to me to accept that if it looks like a duck, and it quacks like a duck, then it is a duck, for a lot (but not all) of important purposes.
I.e., the only duck it walks and quacks like is autocomplete; it does not have agency or any other “emergent” features. For something to even have an emergent property, the system needs feedback from itself, which an LLM does not have.
this enables the company to raise more capital by borrowing against its equity
You can always get asset-backed loans, even as a company. Why should we provide welfare for businesses?
Also, you would need an uncaptured market for anything you said to even have an effect. When 90% of trades are completed off-market, not affecting the price on the tape, are we really doing anything but getting fleeced by market makers? You aren’t signaling anything when your trade data is being bought and hidden from the market using PFOF techniques.
In light of the objective failures of our market, it’s extremely fair to say shareholders make no contribution to the delivery of goods and services. Could they in a perfect market? Sure, but I could have everything in utopia; too bad that doesn’t exist.
Apollo program with ’60s tech: we will send one rocket per mission to the moon, and it will work.
Brain-dead idiots parroting SpaceX as some savior: it will take at least 15 rocket launches per mission to the moon. We will use the worst trajectory possible because we sold the lander contract to a company who can’t figure out low lunar orbit. Two years out and our rocket still blows up when attempting launches.
But sure, SpaceX is a marvel of private industry, shudders
Yeah, that’s why we were supposed to have made it to Mars this year with SpaceX, right? That’s why it took them over 3 minutes to even realize their ship blew up most recently; but that telemetry that took 3 minutes to register a catastrophic failure is really gonna make this great, right? That’s why Apollo sent one rocket per mission to the moon, and with that amazing SpaceX tech… we need to send at least 15 per mission? The public sector did take risks, and by doing so in the past we got the Apollo program. Today we have constant failures by SpaceX being touted as successful missions, with about $10 billion in public funding being evaporated. Now it’s more important that private business sells you on some BS hype train to rake in funds until they drop the next hype train, without realizing their earlier goal, distracting you from it with leaks about hype train 3.
Where are the fully reusable Falcon 9s? That second stage is still not reusable, the crew capsule will never land without parachutes now, and they still take about the same amount of time to turn around as the Space Shuttle did. SpaceX is objectively a failure, selling the next big thing as a means to hide what did not come to fruition. If you honestly think the new rocket is gonna be flying in under a decade, or before SpaceX goes bankrupt, you’re an idiot.
Wasn’t it a tunnel and a bridge? I thought they got 2 of the 3, with the last route having different-gauge rails, which still fucks with the logistics.
I do wonder what atheists experience when they trip…
I see patterns and have hallucinations when tripping. I’ve seen Doritos logos cover my wall and noticed the patterns in mountains. No, you are not talking to God; you are essentially having a waking dream, and I don’t attribute dreams, which are your subconscious trying to interpret your daily actions, to supernatural beings. That would be stupid.
A shared university toilet can still be part of a house or a low-pressure system. I’ve yet to see a public restroom that had a lid on the toilet itself, outside of low-pressure toilets in communal housing. If you can link to where they clarified the shared university toilet was high-pressure, I will stand corrected.
For reducing visible particles, not the nanoparticles, which have a higher concentration. Regardless, it’s all kinda moot, as neither produces levels of bacteria that could realistically get you sick unless you stick your face above the bowl, or near the side openings by the lid, while flushing, and that person has an infection. Just wanted to clarify the science behind it.
Lmao, alright bud, go fire all your employees and see how you do. Then you will understand who needs to be loyal to whom.