austria is gonna burn before we give up falco, lol
not true; that’s a developer thing, not steam itself.
steam offers it as an option, it doesn’t force developers to use it.
plenty of games bought on steam can be run entirely without steam.
but “YOU”, there’s still room for improvement!
not really, highly depends on the game… definitely worth checking beforehand though!
haven’t run into any problems so far, but that doesn’t mean that it can’t trigger anti-cheats
FYI, for anyone interested in fixing this kind of bs:
there’s a program called WeMod that easily fixes this kind of thing.
it’s basically an automated trainer platform that lets you cheat in games with 0 prerequisites, know-how, or effort.
highly recommended for stuff like assassin’s creed, far cry, and similar games with bullshit grind.
setting xp/dmg/resources to something like 2 or 3X literally makes the game playable again!
(probably collects a ton of telemetry, which I don’t care about on my gaming system…)
yeah, no.
thing is: YT/google/the data kraken knows you regardless of whether or not you’re logged in.
they track everything from your IP to your location (even just approximate, based on IP), screen size, browser, OS, and sooo much more.
being logged in makes it easier to track you within a site, but you get tracked regardless.
Orconomics (Dark Profit Saga, Trilogy) for the exact same reason!
excellent fun to read, incredibly funny!
japan is also kinda fucking itself over twice:
so, yeah, double fucked!
this is not true.
it entirely depends on the specific application.
there is no OS-level, standardized, dynamic allocation of RAM (definitely not on windows, i assume it’s the same for OSX).
this is because most programming languages handle RAM allocation within the individual program, so the OS can’t allocate RAM however it wants.
the OS could put processes to “sleep”, but that’s basically just the previously mentioned swap memory and leads to HD degradation and poor performance/hiccups, which is why it’s not used much…
so, no.
RAM is usually NOT dynamically allocated by the OS.
it CAN be dynamically allocated by individual programs, IF they are written in a way that supports dynamic allocation of RAM, which some languages do well, others not so much…
it’s certainly not universally true.
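to illustrate the program-side point, here’s a quick python sketch (illustrative only, CPython behavior): the language runtime, not the OS, decides when to grab more memory for a growing list, and it over-allocates in chunks:

```python
import sys

# watch a python list request more memory as it grows;
# the runtime's allocator decides this, not the OS.
sizes = []
lst = []
for i in range(50):
    lst.append(i)
    sizes.append(sys.getsizeof(lst))

# the reported size jumps in steps: the runtime over-allocates
# in chunks instead of asking for memory element by element.
print(sorted(set(sizes)))
```

(the exact step sizes are a CPython implementation detail, so don’t rely on the specific numbers.)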
also, what you describe when saying:
Any modern OS will allocate RAM as necessary. If another application needs, it will allocate some to it.
…is literally swap. that’s exactly what the previous user said.
and swap is not the same as “allocating RAM when a program needs it”, instead it’s the OS going “oh shit! I’m out of RAM and need more NOW, or I’m going to crash! better be safe and steal some memory from disk!”
what happens is:
the OS runs out of RAM and needs more, so it marks a portion of the nearest available disk as swap space and starts using that instead.
HDs are not built for this use case, so whichever processes use the swap space become slooooooow and responsiveness suffers greatly.
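if you want to see how much swap is actually in use, /proc/meminfo has the numbers on linux. a minimal parsing sketch (the sample string below stands in for the real file):

```python
def swap_usage_kib(meminfo_text):
    """Parse (total, used) swap in KiB out of /proc/meminfo-style text."""
    values = {}
    for line in meminfo_text.splitlines():
        key, rest = line.split(":", 1)
        values[key.strip()] = int(rest.strip().split()[0])  # values are in kB
    total = values["SwapTotal"]
    used = total - values["SwapFree"]
    return total, used

# on a real linux box you'd feed it the actual file:
#   total, used = swap_usage_kib(open("/proc/meminfo").read())
sample = "SwapTotal: 8388604 kB\nSwapFree: 6291452 kB"
print(swap_usage_kib(sample))  # → (8388604, 2097152)
```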
on top of that, memory of any kind is built for a certain amount of read/write operations. this is also considered the “lifespan” of a memory component.
RAM is built for a LOT of (very fast) R/W operations.
hard drives are NOT built for that.
RAM has at least an order of magnitude more R/W ops going on than a hard drive, so when a computer uses swap excessively, instead of as very last resort as intended, it leads to a vastly shortened lifespan of the disk.
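you can do the back-of-the-envelope math yourself. all the numbers below are assumed, purely for illustration (a plausible consumer SSD endurance rating and guessed write volumes):

```python
# rough lifespan estimate from an endurance rating (all numbers assumed!)
TBW = 150                   # assumed drive endurance: 150 TB written
NORMAL_WRITES_GB_DAY = 15   # assumed normal desktop write volume
SWAP_WRITES_GB_DAY = 80     # assumed extra writes from heavy, sustained swapping

def lifespan_years(gb_per_day):
    """Years until the endurance rating is exhausted at a given write rate."""
    return TBW * 1000 / gb_per_day / 365

print(f"normal use:     ~{lifespan_years(NORMAL_WRITES_GB_DAY):.1f} years")
print(f"heavy swapping: ~{lifespan_years(NORMAL_WRITES_GB_DAY + SWAP_WRITES_GB_DAY):.1f} years")
```

with numbers like these, sustained swapping cuts the drive’s life from decades to a few years, which is the whole point.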
for an example of a VERY stupid, VERY poor implementation of this behavior, look up the apple M1’s rapid SSD degradation.
short summary:
apple only put 8GB of RAM into the first-gen M1s, which made the OS use swap almost continuously, which wore out the SSD MUCH faster than expected.
…and since the SSD is soldered onto the mainboard, that completely bricks the device in about half a year to a year, depending on usage.
TL;DR: you’re categorically and objectively wrong about this. sorry :/
hope you found this explanation helpful tho!
“debunking” requires a source… otherwise they just put forth a claim
Tell me, if those companies experienced more productivity, why did they not continue to implement it?
they did. 80% of them did exactly that!
it works exactly as expected: the companies that switched to a 32h-week model saw increased productivity, and 80% chose to keep it.
read the study.
probably so that the direct translation into English is easier to understand
got curious, googled it, here’s something interesting:
https://news.usask.ca/articles/research/2018/u-of-s-study-hones-in-on-causes-of-ms-disability.php
seems genetic. which makes sense.
apparently that region just got unlucky with its gene pool. though, as the news release states, more research is necessary to be certain.
being caused by environmental chemicals hasn’t been definitively ruled out, but it’s not looking likely
(btw, bravo on an actually readable press release by a university!)
Meaning what?
meaning the model’s training data is what lets you work around or improve on that bias. without the training data, that’s (borderline) impossible. so in order to tweak models and develop them further, you need to know exactly what went into the model, or you’ll waste a lot of time guessing.
I omitted requirements on freely sharing it as implied, but otherwise?
you disregarded half of what makes an AI model. the half that actually results in a working model. without the training data, you’d only have some code that does…something.
and that something is entirely dependent on the training data!
so it’s essential, not optional, for any kind of “open source” AI, because without it you’re working with a black box. which is by definition NOT open source.
all models carry bias (see recent gemini headlines for an extreme example), and knowing exactly what those biases are can range from important to extremely important, depending on the use case!
it’s also important if you want to iterate on a model: if you use the same data set and train the model slightly differently, you could end up with entirely different models!
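that sensitivity is easy to demonstrate. a toy sketch (made-up data, plain SGD): identical code and identical data, only the random seed differs, and the resulting “models” already diverge:

```python
import random

# toy demo: same data, same code, only the seed differs,
# and you still end up with different model weights.
data = [(x, 2 * x + 1) for x in range(20)]  # "learn" y = 2x + 1

def train(seed, epochs=3, lr=0.001):
    rng = random.Random(seed)
    w, b = rng.uniform(-1, 1), rng.uniform(-1, 1)  # random init
    samples = data[:]
    for _ in range(epochs):
        rng.shuffle(samples)  # training order also depends on the seed
        for x, y in samples:
            err = (w * x + b) - y
            w -= lr * err * x  # plain SGD update
            b -= lr * err
    return w, b

print(train(seed=0))
print(train(seed=1))  # different seed, different weights
```

same effect at scale: without the exact data (and ideally the training setup), you can’t reproduce or meaningfully iterate on a model.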
these are just 2 examples, there’s many more.
also, you are thinking of LLMs, which are just one kind of model. this legislation applies to all AI models, not just LLMs!
(and your definition of open source is…unique.)
so you’re basically saying it talked itself squarely into uncanny valley?
i honestly didn’t consider that would be an issue for LLMs, but in hindsight…yeah, that’s gonna be a problem…
I’d say it’s worth a watch!
kinda gives a different perspective on the world it’s set in (until the end, where it gets very “the boys”-like, but not in a bad way)
Patch 6 only really broke the script extender (well hotfix #18 really)
nvm, just checked, it’s up to date! (hotfix #19 is supported!)
pretty much everything is up to date!
pretty sure ATM9 recommended minimum RAM is 10GB…i have it at 12GB.
but i also run it at about 100fps and view distance set around 16 with shaders…
this is exactly, and i cannot stress enough just how exactly, the plot of “Don’t look up”