Friendly Ace Lobster 💜

  • 6 Posts
  • 8 Comments
Joined 1 year ago
Cake day: June 13th, 2023



  • For pdf export, you can just org-latex-export-to-pdf. In the background it translates your doc to a latex file and then compiles that (I know you stated you didn’t like tex, but in case you can bear a few commands this is actually super useful, as it gives you more control over the doc: you can insert raw latex parts in your doc and it will handle them nicely). Same for publishers: you can just translate your file to tex and that will fit most publication processes. Otherwise you can convert your doc to pretty much anything with pandoc (including .docx).

    Keep in mind however that this is basically just saying: I like the idea of latex (fine granularity at compile time, raw text, and reproducibility) but I prefer org markup for common marks like headers, bold, and refs, and I like having a somewhat pretty editor. If your issue with latex is that writing and formatting are not synchronous, then yeah, this is not for you.
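    As a minimal sketch (the content here is made up), an org file can mix org markup and raw LaTeX freely:

```org
* Results
Plain org markup works as usual: *bold*, /italic/, headings.

Raw LaTeX can be dropped in as-is and is passed through on export:

\begin{equation}
  e^{i\pi} + 1 = 0
\end{equation}
```

    C-c C-e l p (org-latex-export-to-pdf) compiles this to pdf through latex; for the .docx route, something like pandoc notes.org -o notes.docx does the conversion (the filename is just an example).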


  • Depends on what you’re looking for. If you’re dead set on wysiwyg editors, then yeah, onlyoffice is as good as it gets if you want to keep it foss and don’t like libreoffice. Otherwise people seem to like the many scientific markdown editors. But honestly, if you already know emacs, then just… emacs. I’m in academia too, and with the right set of packages it can fit an academic workflow pretty nicely. I write in org mode with org-superstar, olivetti mode to center the text, varying fonts and font sizes for headers, and citar for references (which syncs with a real-time bibtex export from my zotero library). With the added bonus of having all the usual goodness (magit, projectile, you name it).
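    A rough sketch of that setup, assuming use-package (the package names are real; the bib file path is made up):

```emacs-lisp
;; Sketch only: install the packages first; adjust paths to your own setup.
(use-package org-superstar
  :hook (org-mode . org-superstar-mode))   ; pretty heading bullets

(use-package olivetti
  :hook (org-mode . olivetti-mode))        ; center the text column

(use-package citar
  :custom
  ;; Point at the bibtex file your reference manager keeps in sync
  ;; (e.g. a Zotero auto-export). "~/bib/library.bib" is hypothetical.
  (citar-bibliography '("~/bib/library.bib")))
```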





  • The proposal explicitly goes against “more fingerprinting”, which is maybe the one area where they are honest. So I do think it’s not about more data collection, at least not directly. The token is generated locally on the user’s machine, and it’s supposedly the only thing that needs to be shared. So the website’s vendor does potentially get some info (in effect: that you passed the test used to verify your client), but I don’t think that’s the major point.
    What you’re describing is the status quo today. Websites try to run invasive scripts to get as much info about you as they can, and if you try to derail that, they deem that you aren’t human, and they throw you a captcha.
    Right now though, you can absolutely configure your browser to lie at every step about who you are.
    I think that the proposal has much less to do with direct data collection (there are better ways to do that) than it has to do with control over the content-delivery chain.
    If google gets its way, it would effectively shift control over how you access the web from you to them. This enables all the stuff people have been talking about in the comments: the end of edge-case browsers and operating systems, the prevention of ad blocking (and with it, indeed, the extension of data collection), the consolidation of chrome’s dominant position, etc.
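    To make the control shift concrete, here is an illustrative sketch of the token flow. The real WEI token format and APIs are not public, so every name below is hypothetical, and an HMAC stands in for the attester’s real signature scheme. The point is only the trust-model change: the site stops inspecting your client and instead checks a signature from an “attester” (the platform vendor) that vouches for it.

```python
# Hypothetical sketch of an attestation flow -- not the actual WEI API.
import hashlib
import hmac
from typing import Optional

ATTESTER_KEY = b"attester-secret"  # stand-in for the attester's signing key


def attester_issue_token(environment_ok: bool) -> Optional[bytes]:
    """Runs on the user's device: only 'approved' environments get a token."""
    if not environment_ok:  # e.g. a modified OS or an unapproved browser
        return None         # no token -> the site can refuse to serve you
    payload = b"client-approved"
    sig = hmac.new(ATTESTER_KEY, payload, hashlib.sha256).digest()
    return payload + b"." + sig


def website_verify(token: bytes) -> bool:
    """Runs server side: the site trusts the attester, not your client."""
    payload, _, sig = token.partition(b".")
    expected = hmac.new(ATTESTER_KEY, payload, hashlib.sha256).digest()
    return hmac.compare_digest(sig, expected)


token = attester_issue_token(environment_ok=True)
print(token is not None and website_verify(token))  # prints True
```

    Note where the gatekeeping lives: the user can read this code but cannot mint a valid token for a client the attester disapproves of, which is exactly the power shift described above.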



  • As others have pointed out, it goes way beyond ad blocking. It’s a complete reversal of the trust model, and is basically DRM for your OS:
    Right now, websites rightfully assume that clients can’t be trusted. Any security measure happens on the server side, with the rationale that the user controls the client and you as a dev control the server. If your security is worth two cents, you secure server side. This change proposes to extend vendor power by defining a set of rules about what vendors deem acceptable as a client app, and enforcing it through a token system. It gives way too much power to the vendor, who gets to dictate what you can do on your machine.
    We actually have a live example of how that could go down with safetynet on android. Instead of doubling down on the biggest security issue there (OEMs that refuse to support their software for more than 1 or 2 years after release, which, quite frankly, should be universally considered unacceptable), google decided that OEMs should be trusted way more than the user. Therefore modifying your own OS in any way, even if it’s rife with security flaws to begin with and you’re just trying to fix that, breaks safetynet. If you break safetynet, “critical apps” like banking apps stop working altogether.
    The worst part is that there are ways to circumvent safetynet breakage, because in the end, if DRM taught us anything, it is that if you control the client and know your way around, with enough work you can do pretty much anything you want with it. So bad actors are certainly not kept at bay; you just unjustly annoy people with legitimate use cases, or people just experimenting with their hardware, because in the end you consider your users to be at best dumb security flaws, at worst cash machines, and often both at the same time.