Which protocol or open standard do you like or wish was more popular?
from cyclohexane@lemmy.ml to programming@programming.dev on 02 Sep 2024 19:12
https://lemmy.ml/post/19862386
There are a couple I have in mind. Like many techies, I am a huge fan of RSS for content distribution and XMPP for federated communication.
The really niche one I like is S-expressions as a data format and configuration in place of json, yaml, toml, etc.
I am a big fan of plaintext formats, although I wish markdown had a few more features, like tables.
ActivityPub :) People spend an incredible amount of time on social media—whether it be Facebook, Instagram, Twitter/X, TikTok, and YouTube—so it’d be nice to liberate that.
I mean, you’re in the right place to advocate for that 😜
cuelang.org. I deal with a lot of k8s at work, and I’ve grown to hate YAML for complex configuration. The extra guardrails that Cue provides are hugely helpful for large projects.
Hmm, what’s the alternative? XML :-)? People hate the Gradle DSL just for not being XML.
Oh, this looks great!
I’ve been struggling to choose between kustomize and helm. Neither seems to make k8s easier to work with.
I’ll have to try cuelang now. Something sensible without significant whitespace that confuses editors, and variables without templating.
I’ll have to see how it holds up with my projects
Oh, this! YAML was a terrible choice. And that’s coming from someone who likes Python and prefers whitespace over brackets. YAML never clicked for me.
noyaml.com
What do you mean, you can’t easily tell what this is?
The term open-standard does not cut it. People should start using “publicly available and sharable” instead (maybe there is a better name for it).
ISO standards for example are technically “open”. But how relevant is that to a curious individual developer when anything you need to implement would require access to multiple “open” standards, each coming with a (monetary) price, with some extra shenanigans ^[archived]^ on top.
IETF standards however are actually truly open, as in publicly available and sharable.
how about FOSS, free and open-source standards /s
why do we call standards open when they require people to pay for access to the documents? to me that does not sound open at all
Because non-open ones are not available, even for a price. Unless you buy something bigger than the “standard” itself of course, like a company that is responsible for it or that has access to it.
There is also the process of standardization itself, with committees, working groups, public proposals, …etc involved.
Anyway, we can’t backtrack on calling ISO standards and their likes “open” on the global level, hence my suggestion to use more precise language (“publicly available and sharable”) when talking about truly open standards.
It’s a historical quirk of the industry. This stuff came around before Open Source Software and the OSI definition was ever a thing.
10BASE5 ethernet was an open standard from the IEEE. If you were implementing it, you were almost certainly an engineer at a hardware manufacturing company that made NICs or hubs or something. If it was $1,000 to purchase the standard, that’s OK, your company buys that as the cost of entering the market. This stuff was well out of reach of amateurs at the time, anyway.
It wasn’t like, say, DECnet, which began as a DEC project for use only in their own systems (but later did open up).
And then you have things like “The Open Group”, which controls X11 and the Unix trademark. They are not particularly open by today’s standards, but they were at the time.
datatracker.ietf.org/doc/html/rfc2549
;)
Alright, but seriously: IPv6.
datatracker.ietf.org/doc/html/rfc6214
ISO 216 paper sizes work like this: www.printed.com/blog/paper-size-guide/
It’s so fucking neat and intuitive! How is it not used more???
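For anyone curious why it works out so neatly: A0 is defined as one square metre with its sides in the ratio 1:√2, and every following size is the previous one folded in half across the long side, which preserves the ratio. A quick sketch (the integer halving reproduces the standard’s published millimetre sizes):

```python
# A-series sizes: start from A0 (841 x 1189 mm, ~1 m^2, ratio 1:sqrt(2))
# and fold across the long side; the ratio is preserved at every step.
w, h = 841, 1189  # A0 in mm
for n in range(9):
    print(f"A{n}: {w} x {h} mm")
    w, h = h // 2, w  # new short side = half the old long side (rounded down)

# Prints A4 as 210 x 297 mm and A3 as 297 x 420 mm -- exactly the
# published ISO 216 sizes.
```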
sorry to tell you this bud…
![map of which countries use ISO 216; guess which one just had to be different](https://feddit.nu/pictrs/image/5aa9d981-9685-4ab5-ac8c-343f30000c7e.jpeg)
It’s also worth noting that switching from ANSI to ISO 216 paper would not be a substantial physical undertaking, as the short-side of even-numbered ISO 216 paper (eg A2, A4, A6, etc) is narrower than for ANSI equivalents. And for the odd-numbered sizes, I’ve seen Tabloid-size printers in America which generously accommodate A3.
For comparison, the standard “Letter” paper size (aka ANSI A) is 8.5 inches by 11 inches. (note: I’m sticking with American units because I hope Americans read this). Whereas the similar A4 paper size is 8.3 inches by 11.7 inches. Unless you have the rare, oddball printer which takes paper long-edge first, this means all domestic and small-business printers could start printing A4 today.
In fact, for businesses with an excess stock of company-labeled #10 envelopes – a common size of envelope, measuring 4.125 inches by 9.5 inches – a sheet of A4 folded into thirds will still (just barely) fit. Although this would require precision folding, that’s no problem for automated letter mailing systems. Note that the common #9 envelope (3.875 inches by 8.875 inches) used for return envelopes will not fit an A4 sheet folded in thirds. It would be advisable to switch entirely to A series paper and C series envelopes at the same time.
Confusingly, North America has an A-series of envelopes, which bear no relation to the ISO 216 paper series. Fortunately, the overlap is only for the less-common A2, A6, and A7.
TL;DR: bring reams of A4 to the USA and we can use it. And Tabloid-size printers often accept A3.
My printer will print and scan any A-size paper. But I can’t even buy A-series paper! Fucking America
Clearly the rest of the world are communists! It’s not us, it’s you! I’m not crying you’re crying! 😭😭😭
Also, A4 simply has a better ratio than letter. Letter is too wide, making A4 better to hold and it fits more lines per page.
Presumably you could just buy that paper size? They’re pretty similar sizes; printers all support both sizes. I’ve never had an issue printing a US Letter sized PDF (which I assume I have done).
Kind of weird that you guys stick to US Letter when switching would be zero effort. I guess to be fair there aren’t really any practical benefits either.
I’ve literally never even seen A paper in America. Probably would have to special order it from another country
Ah fair enough.
I mean I’d love to use it. Of course America is behind the times of civilized nations.
Most preschool kids know what an A4 sheet is. Not sure how it can be used more.
Is ipfs usage growing? Stagnant? No idea… Distributed serving of content seems great
I never really quite understood IPFS and why it gets used where I see it today. What problem is it solving?
file sharing between planets, obviously /s
IPFS would replace Content Delivery Networks in present day.
It would also allow you to host software and other content from your own network again without the constraints modern Internet Service Providers pose on you to limit your self-hosting capabilities.
If applications are built for it, it could serve as live storage for your applications too.
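For a feel of the model, here is a minimal sketch of publishing and fetching content by hash, assuming a local IPFS daemon on its default port and the third-party `ipfshttpclient` package (both are assumptions, not something from the comment above):

```python
import ipfshttpclient

# Talk to the local daemon (default: /dns/localhost/tcp/5001/http).
client = ipfshttpclient.connect()

# Add a file: the returned hash is a content ID derived from the bytes,
# so any node holding the same bytes can serve it.
res = client.add("site/index.html")
cid = res["Hash"]
print(f"now reachable from any gateway as /ipfs/{cid}")

# Fetch it back by content, not by server location.
print(client.cat(cid)[:80])
```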
We ran ipfs-search. In one of the experiments we could show that a distributed search index on ipfs-search, accessible through JavaScript, is likely feasible with the necessary research. Parts of the index would automatically be hosted by clients who used the index, thus creating a fairly resilient system.
Too bad IPFS couldn’t get over the technical hurdles of limiting connection setup time. We could get a fast (ElasticSearch based) index running and hosted over common web technologies, but fetching content from IPFS directly was generally rather slow.
Would you be interested in a similar protocol that supports more things (and is IMO easier to set up)?
I’m not actively looking but please do share references! Other people may read this and they may want to know too. Perhaps I’ll jump back in the rabbit hole at some point too 😁
Okay here it goes!
Tenfingers sharing protocol & python implementation (your python needs cryptodomex, or use the frozen executables).
tenfingers.org
You share theirs, they share yours (all encrypted)! So no benevolent nodes or crypto and it’s 100% decentralised.
I’m working on a better documentation on how to set it up (just forward a port and run setup basically).
I had to read the overview and it looks nice. It reads like IPFS without some of the challenging cruft. Well written!
IPFS seemingly works small scale but not large scale. What makes tenfingers handle millions of files and petabytes of data better than IPFS? Perhaps that is not the goal. In what way do you think the tech scales? Why will discovery of the node which has the data be short?
I want to ask for benchmarks but you can’t do a full benchmark without loads of resources.
Thanks!
IPFS is static, whereas tenfingers is dynamic when it comes to the links. So you can update the shared data without needing to redistribute the link.
That said, it’s also very different tech-wise; there is no need for benevolent nodes (or some crypto or payment).
Nodes do not need to be trustworthy either, so node discovery is very simple (basically just ask other nodes for known nodes).
The distribution part, where nodes share your data, is based on reciprocal sharing, you share theirs and they share yours. If they don’t share any more (there are checks) you just ditch the deal and ask for a new deal with another node.
With oversharing (by default you share your data with 10 other nodes, and share their data in return) this should both make bad nodes a non-problem, and also make for good uptime and takedown safety.

This system also makes it infinitely scalable node-wise, as a node does not need to know all other nodes, just enough for its needs (for example thousands out of millions of existing nodes).

To share lots of data, you need to bring enough storage and bandwidth to the table because it’s reciprocal, so basically it’s up to your node how much it can share.
Big data sets are always complicated because of errors and long download times, I have done 300MB files without problems, but the download process sure can be made better (with parallel downloading for example and better error handling).
I haven’t worked on sharing way bigger datasets, even a simple terabyte is a pita to download on the regular internet :-) and the use case is more the idea of sharing lots of smaller data, like a website for example, or a chat.
What do you think, am I missing something important? Or of course if you have other questions please do ask!
Also, sorry I’m writing this on my mobile so it’s not very well written.
Edit: missed one question; getting the data is straightforward to use (it’s handled in a somewhat complicated way because of the changing nature of things), but when you download, you have the addresses of the nodes sharing your data, so you just connect to one of them and download it (or the next one if the first isn’t up, and so on). So that should not be any kind of bottleneck.
Yeah it’s basically a benevolent-store-static-data, where static is you cannot change it (or you have to upload new data and make a new link to it).
Cool name though.
ISO 8601 date format. Not because it’s from a standards body, but because it’s simple, sensible, clearly defined, easy to recognize, and very effective.
Date field placement in any order other than most-significant-digits-first is not only counterintuitive, but needlessly complicated to work with. Omitting critical information like the century is ambiguous and confusing.
We don’t live in isolated villages any more. Mixing and matching those problems by accepting all the world’s various regional and personal date styles, especially with no reliable indication of which ones apply in any given case, leads to the hodgepodge of error-prone date madness that we have today.
The 2024-09-02 format should be taught in schools and required in official documents. Let the antiquated date styles fall into disuse outside of art and personal correspondence, like cursive writing.
I had the fortune of being hired to build up my department from zero, and one of the first “rules” I made was that all dates are ISO 8601. Now every process runs on 8601; if you use anything different, your code is going to fail eventually when it finds another date column in 8601.
And it can be sorted alphabetically in all software. That’s a pretty big advantage when handling files on a computer
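A tiny demonstration of why that works: every field is fixed-width with the most significant digits first, so plain lexicographic string sorting is chronological sorting.

```python
dates = ["2024-09-02", "2023-12-31", "2024-01-15"]
assert sorted(dates) == ["2023-12-31", "2024-01-15", "2024-09-02"]  # chronological

# The same dates in DD-MM-YYYY sort uselessly:
ddmm = ["02-09-2024", "31-12-2023", "15-01-2024"]
print(sorted(ddmm))  # ['02-09-2024', '15-01-2024', '31-12-2023'] -- wrong order
```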
I love this standard. If you dig deeper into it, the standard also covers a way to express intervals and periods. E.g. “P1Y2M10DT2H30M” represents one year, 2 months, 10 days, 2 hours and 30 mins.
I recall once using the standard when writing a cron-style scheduler.
I also like the POSIX “seconds since 1970” standard, but I feel that should only be used in RAM when performing operations (time differences in timers etc.). It irks me when it’s used for serialising to text/JSON/XML/CSV.
Also: Does Excel recognise a full ISO8601 timestamp yet?
I’ve seen bugs where programmers tried to represent a date in epoch time, in seconds or milliseconds, in JSON. So something like “pay date” would be represented by a timestamp, and would get off-by-one errors because whatever time library the programmer was using would do a time zone conversion on the timestamp and then truncate to the date portion.

If the programmer had used ISO 8601 style formatting, I don’t think they would have included the time part, and the bug could have been avoided.
Use dates when you need dates and timestamps when you need timestamps!
That’s an issue with the time library, not with timestamps. Timestamps are always in UTC; you need to do the conversion to your local time when displaying the value. There should be no possible off-by-one errors unless you are doing something really wrong.
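A minimal sketch of the failure mode described above, using only the standard library: a “pay date” stored as a midnight-UTC timestamp, then naively truncated after a local-time conversion (the UTC-5 zone is just an example):

```python
from datetime import datetime, timezone, timedelta

pay_date_ts = datetime(2024, 9, 2, tzinfo=timezone.utc).timestamp()

# Buggy consumer: converts to local time before truncating to a date.
local_tz = timezone(timedelta(hours=-5))
print(datetime.fromtimestamp(pay_date_ts, tz=local_tz).date())      # 2024-09-01, off by one!

# Correct: keep the conversion in UTC -- or just store the date as "2024-09-02".
print(datetime.fromtimestamp(pay_date_ts, tz=timezone.utc).date())  # 2024-09-02
```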
RFC 3339 is a simplified profile of 8601 that only covers YYYY-MM-DD style formatting, if you only ever use that format and avoid the things like “2024-W36” they’re mostly interchangeable.
The week-of-year is far more relevant in Western Europe, and is used quite a bit in business. I have a Junghans watch that has a week complication.
It’s an important format outside of the US, and gives ISO-8601 an edge as a standard of conformance.
Some countries already use it officially too :)
The year is usually the least significant piece of information in a date, in day-to-day use.
DDMMYY is perfect for daily usage.
Your day-to-day use isn’t everyone else’s. We use times for a lot more than “I wonder what day it is today.” When it comes to recording events, or planning future events, pretty much everyone needs to include the year. And YYYY-MM-DD presents everything exactly in order of significance.
And no matter what, the first digit of a two-digit day or two-digit month is still more significant in a mathematical sense, even if you think that you’re more likely to need the day or the month. The 15th of May is only one digit off of the 5th of May, but that first digit in a DD/MM format is more significant in a mathematical sense and less likely to change on a day to day basis.
For any scheduled date it is irrelevant whether you miss it by a day, a month or a year. From that perspective every part of it is exactly the same: if the date is wrong, then it is wrong. You say that it is sorted in the order of most significance; so for a date it is more significant whether it happened in 1024, 2024 or 9024? That may be relevant for historical or scientific purposes, but not many people need that kind of precision. Most people use calendars for things days or months ahead, not years or decades.

If I get my tax bill, I don’t care about the year in the date because I know that the government wants the money this year, not next year or in ten. If I have a job interview, I don’t care about the year; the day and month are what is relevant. There is a reason why the year is often removed completely when dates are noted: because it is obvious.
Yes I can see why YYYY-MM-DD is nice for stuff like archiving purposes, it makes sorting and grouping very easy but there they already use the best system for the job.
For digital documents I would say that date and time information should be stored in a defined computer readable standard so that the document viewer can render or use it in any way needed. That could be swatch internet time as far as I care because hopefully I would never look at the raw data at all.
Most significant to least significant digit has a strict mathematical definition, which you don’t seem to be following, and it applies to all numbers, not just numerical representations of dates.

And most importantly, the YYYY-MM-DD format is extensible into hh:mm:ss too, within the same schema, out to the level of precision appropriate for the context. I can identify a specific year when the month doesn’t matter, a specific month when the day doesn’t matter, a specific day when the hour doesn’t matter, and on down to minutes, seconds, and decimal portions of seconds to whatever precision I’d like.
Ok, then I am sure we will all be using that very soon, because abstract mathematical definitions always map perfectly onto real-world usage and needs.

It is not that I don’t follow the mathematical definition of significance; it is just irrelevant to the view and scope of the argument that I am making.
YYYY-MM-DD is great for official documents but not for common use. People will always trade precision for ease of use, and that will never change. And in most cases the year is not relevant at all so people will omit it. Other big issue: People tend to write like they talk and (as far as I know) nobody says the year first. That’s exactly why we have DD-MM and MM-DD
YYYY-MM-DD will only work in enforced environments like official documents or workspaces, because everywhere else people will use shortcuts. And even the best mathematic definition of the world will not change that.
Except that DDMMYY has the huge ambiguity issue of people potentially interpreting it as MMDDYY. And it’s not straight sortable.
My team switched to using YYYY-MM-DD in all our inner communication and documents. The “daily date use” is not the issue you think it is.
Yes, and YYYY-MM-DD can potentially be interpreted as YYYY-DD-MM. So that is a non-argument.

I never said that the date format should never be used, just that significance is an arbitrary value; what “significant” means depends on the context. If YYYY-MM-DD were so great in everyday use, then more or even most people would use it, because people, in general, tend to do things that make their lives easier.
There is no superior date format, there are just date format that are better for specific use cases.
That is great for your team, but I don’t think your team is large enough to have any kind of statistical relevance. So it is a great example of a specific use case, but not an argument for general use at all.
No country uses “year day month” ordered dates as standard. “Month day year”, on the other hand, has huge use. It’s the conventions that cause the potential for ambiguity and confusion.
Entire countries, like China, Japan, Korea, etc., use YYYY-MM-DD as their date standard already.
My point was that once you adjust, it isn’t as painful to use as it first appears, and it has great advantages. I didn’t say there wasn’t an adjustment hurdle that many people would balk at.
…wikipedia.org/…/List_of_date_formats_by_country
And every person in those countries uses YYYY-MM-DD always in their day to day communication? I really doubt that. I am sure even in those countries most people will still use short forms in different formats.
Yes, and their shorthand versions, like writing 9/4, have the same problem of being ambiguous.
You keep missing the point and moving the goal posts, so I’ll just politely exit here and wish you well. Peace.
I never moved the goalposts, all I always said was that a forced and clunky date format like YYYY-MM-DD will never find broad use or acceptance in the major population of the world. It is not made for easy day to day use.
If it sounded like I moved goalposts, that may be due to English being my second language. Sorry for that.
But yes, I think we both have made our positions and statements clear, and there is not really a common ground for us. Not because one of us would be right or wrong but because we are not talking about the topic on the same level of abstraction. I talk about it from a social, very down to the ground perspective and you are at least 2 levels of abstraction above that. Nothing wrong with that but we just don’t see the same picture.
And yes, using YYYY-MM-DD would be great; I don’t say anything against that on a general level. I just don’t ever see any chance of it being commonly used.
So thank you for the great discussion and have a nice day.
I arrived to manage releases at a company where the previous manager named releases like “release04092016”, US-style. My first recommendation was to name releases “releaseyyyymmdd”, so “release20160409”. Another manager asked me why change that, so I showed her a sorted list of release git branches and asked her: can you tell me when the last release was? (a very common question). Of course, to find the last release in mmddyyyy order you need to check the whole list, because the sort order is useless. With yyyymmdd the answer was immediate: just look at the last row.
For the newbies: RFC 3339 vs ISO 8601. Bookmark this site.
That looks like an interesting diagram, but the text in it renders too small to read easily on the screen I’m using, and trying to open it leads to a javascript complaint and a redirect that activates before I can click to allow javascript. If it’s yours, you might want to look in to that.
The table below works, though. Thanks for the link.
Alas it’s not my site (and I think it’s meant to be read on a desktop screen), so I can’t fix it.
7-digit years feels way too optimistic, but I’ll be rooting for us.
Also, you can sort by ascending file names
I’m a Plan 9 from Bell Labs fan. Imagine how excited I was when WSL used 9P for its plumbing. Then they scrapped it all for WSL2.
Just, the power they managed to get out of those union mounts… Your application wants access to the mouse? Sure, here’s a file named “mouse”; it’s got the coordinates in it. You want to draw to the screen? Here’s a file called “bitmap” or whatever, just write to it. You want to start a process on another machine? Just cd to it and start the process there. Want to have the UI show up on your machine? Symlink your bitmap file to that directory.
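A sketch of what that buys you, meant to run on an actual Plan 9 system: reading the mouse is just file I/O. (The message layout, an ‘m’ tag followed by x, y, buttons and a millisecond timestamp in a 49-byte record, is recalled from memory; treat the details as assumptions.)

```python
# Plan 9: the window system serves the pointer as a plain file.
with open("/dev/mouse", "rb", buffering=0) as mouse:
    while True:
        msg = mouse.read(49)          # one event per fixed-size record (assumed)
        if not msg.startswith(b"m"):
            continue                  # other tags signal e.g. window resizes
        x, y, buttons, msec = (int(f) for f in msg[1:].split())
        print(f"pointer at ({x}, {y}), buttons bitmask {buttons:b}")
```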
I also wish early web composability could have stayed and expanded. Like the old VLC embed player, which would just show up in your browser and could play any file inline? Great stuff. Imagine if every application composed with everything else, like the Android Activity and Intent concepts but for anything, just by virtue of living in the same OS. Need an image? Just ask the OS and it will present the user with many ways to procure an image, let the selected one run, and hand you back an image; you don’t even have to care where from. In a way, it’s what the Arcan guy is doing with his experiments, although that’s more for stitching together graphical pipelines.
Plan 9 even extended the “everything is a file” philosophy to networking, unlike everybody else that used sockets instead.
Are sockets not files?
They’re “file like” in the sense that they’re exposed as an `fd`, but they’re not exposed via the filesystem at all (unlike e.g. unix sockets), and the existing API is just mapped over the sockets one (i.e. `write()` instead of `send()`, `read()` instead of `recv()`). There’s also a difference in how you create them: you `open()` a file, but `connect()` a socket, etc.

(As an aside, it turns out Bash has its own virtual file-based wrapper around sockets, so you can do things like `cat` a remote port with Bash, something you can do natively in Plan 9.)

Really it just shows that “everything is a file” didn’t stand up in practice; there’s more stuff that needs special treatment than doesn’t (e.g. interacting with TTYs also has special APIs). It makes more sense to have a better dedicated API than a generic catch-all one.
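Python happens to show both halves of that split nicely: a socket is created with its own dedicated calls, but can then be wrapped to behave like a file.

```python
import socket

sock = socket.create_connection(("example.com", 80))  # connect(), not open()
f = sock.makefile("rwb")                              # now it acts like a file
f.write(b"GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
f.flush()
print(f.read(200))                                    # read(), not recv()
sock.close()
```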
It’s completely bonkers that JPEG-XL is as good as it is and no one wants to actually implement it into web browsers
What’s so good about it?
Basically smaller file sizes than JPEG at the same quality, and it also automatically loads a lower-quality version of the image before it loads the higher-quality version, instead of loading it pixel by pixel like an image would normally load. Google refuses to implement this tech into Chrome because they have their own AVIF format, which isn’t bad but is significantly outclassed by JPEG XL in nearly every conceivable metric. Mozilla also isn’t putting JPEG XL into Firefox, for whatever reason. If you want more detail, here’s an eight minute video about it.
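If you want to try it yourself, the reference codec from libjxl ships the `cjxl`/`djxl` command-line tools; a hedged sketch (tool flags are from memory, file names made up):

```python
import subprocess

# Lossy encode at quality 90 -- typically much smaller than a JPEG of
# comparable visual quality.
subprocess.run(["cjxl", "photo.png", "photo.jxl", "-q", "90"], check=True)

# Decode back to PNG to inspect the round trip.
subprocess.run(["djxl", "photo.jxl", "roundtrip.png"], check=True)
```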
I’m under the impression that there are two reasons we don’t have it in chromium yet:

Google already wrote the Wuffs language, which is specifically designed to handle formats in a fast and safe way, but it looks like it only has one dedicated maintainer, which means it’s still stuck at a bus factor of 1.
Honestly, Google or Microsoft should just make a team to work on a jpg-xl library in Wuffs, while Adobe should make a team to work on a jpg-xl library in rust/zig.
That way everyone will be happy, we will have two solid implementations, and they’ll both be made focussing on their own features/extensions first so we’ll all have a choice among libraries for different needs (e.g. browser lib focusing on fast decode, creative suite lib for optimised encode).
Didn’t Google already include JPEG XL support in developer versions of Chromium, just to remove it later?
Chromium had it behind a flag for a while, but if there were security or serious enough performance concerns then it would make sense to remove it and wait for the jpeg-xl encoder/decoder situation to change.
It baffles me that someone large enough hasn’t gone out of their way to make a decoder for chromium.
The video streaming services have done a lot of work to switch users to better formats to reduce their own costs.
If a CDN doesn’t add it to chromium within the next 3 years, I’ll be seriously questioning their judgement.
Adobe announced they were supporting it (in Camera Raw), that’s when the Chrome team announced they were removing it (due to a “lack of industry interest”)
It’s great and should be adopted everywhere, to replace every raster format from JPEG photographs to animated GIFs (or the more modern live photos format with full color depth in moving pictures) to PNGs to scanned TIFFs with zero compression/loss.
This is why I fucking love the internet.
I mean, I’ll never take the time to get this knowledgable about image formats, but I am ABSOLUTELY fuckdamn thrilled that at least SOMEONE out there takes it seriously.
Good on you, pixel king
Funny thing is, there was talk on the Chrome bug tracker of using just this ability transparently at the HTTP layer (like gzip/brotli compression), but they’re so set on pushing their AVIF format that they backed away from it.
Someone made a fair point that having a format be both lossy and lossless is not necessarily a great idea. If you download a jpeg file you know it will be compressed; if you download a png it will be lossless. Sifting through jxl files to check whether they’re lossy or not doesn’t sound very fun.
All in all I’m a big supporter of jxl though, it’s one of the only github repos I actively follow.
While I agree that it’s somewhat bad that there is no distinction between lossless and lossy jxl in the file extension, I think it’s really not a big deal compared to the present situation with jpg/png.
The reason being that if you download a png file you have no idea if it’s been converted from jpg, if it’s a screenshot of a jpg, or if it’s been subjected to lossy re-encoding by a tool or a website upload process.
The only thing you can really do to try and see if the file you’ve downloaded has suffered encoding loss is to do an image search on it and see if there are any better quality versions out there. You’d do the exact same thing with a jxl file.
Functionally speaking, I don’t see this as a significant issue.
JPEG quality settings can run a pretty wide gamut, and obviously wouldn’t be immediately apparent without viewing the file and analyzing the metadata. But if we’re looking at metadata, JPEG XL reports that stuff, too.
Of course, the metadata might only report the most recent conversion, but that’s still a problem with all image formats, where conversion between GIF/PNG/JPG, or even edits to JPGs, would likely create lots of artifacts even if the last step happens to be lossless.
You’re right that we should ensure that the metadata does accurately describe whether an image has ever been encoded in a lossy manner, though. It’s especially important for things like medical scans where every pixel matters, and needs to be trusted as coming from the sensor rather than an artifact of the encoding process, to eliminate some types of error. That’s why I’m hopeful that a full JXL based workflow for those images will preserve the details when necessary, and give fewer opportunities for that type of silent/unknown loss of data to occur.
Adobe is backing the format, Apple support is coming along, and there are rumors that Apple is switching from HEIC to JPEG XL as a capture format as early as the iPhone 16 coming out in a few weeks. As soon as we have a full blown workflow that can take images from camera to post processing to publishing in JXL, we might see a pretty strong push for adoption at the user side (browsers, websites, chat programs, social media apps and sites, etc.).
Do you know QOI format ? I would appreciate your opinion about it.
QOI is just a format that’s easy for a programmer to get their head around.
It’s not designed for everyday use and hardware optimization like jpeg-xl is.
You’re most likely to see QOI in homebrewed game engines.
To be honest, no. I mainly know about JPEG XL only because I’m acutely aware of the limitations of standard JPEG for both photography and high resolution scanned documents, where noise and real world messiness cause all sorts of problems. Something like QOI seems ideal for synthetic images, which I don’t work with a lot, and wouldn’t know the limitations of PNG as well.
I think I would feel better using JPEG-XL where I currently use WebP. Here’s hoping for wider support.
Good news! I believe the Ladybird Browser intends to include support for JPEG XL.
PGP or GPG, however you spell it. You can encrypt stuff, protect your email from prying eyes!
Also FOSS in general.
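A minimal sketch of the encryption step, assuming the third-party `python-gnupg` wrapper, an installed gpg binary, and a recipient key already in your keyring (the address is a placeholder):

```python
import gnupg

gpg = gnupg.GPG()
encrypted = gpg.encrypt("meet me at noon", "alice@example.com")
assert encrypted.ok, encrypted.status
print(str(encrypted))  # ASCII-armoured ciphertext, safe to paste into an email
```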
Huge fan of PHP…I mean PGP, oh god auto correct, you scary 😳
The tooling around it needs to be brought up to snuff. It seems like it hasn’t evolved much in the last 20+ years.
I had a small team make an attempt to use it at work. Our conclusion was that it was too clunky. Email plugins would fool you into thinking it was encrypted when it wasn’t. When it did encrypt, the result wasn’t consistently readable by plugins on the receiving end. The most consistent method was to write a plaintext doc, encrypt it, and attach the encrypted version to the email. Also, key servers are setup by amateurs who maintain them in their spare time, and aren’t very reliable.
One of the more useful things we could do is have developers sign their git commits. GitHub can verify the signature using a similar setup to SSH keys.
It’s also possible to use TLS in a web of trust way, but the tooling around it doesn’t make it easy.
The semantic web and social linked data. We could have applications share data without depending on big tech, but rather based on application standards.
It can be used today and gains traction but I wouldn’t mind it going faster. Especially the interoperable personal app space could use some love and attention.
Like with the Solid Project ?
solidproject.org
Exactly. The Semantic Web is broader than Solid but Solid is great for personal apps.
Say you buy a smartphone. The specifications of the smartphone likely belong elsewhere than in a Solid Personal Online Datastore, but they can be pulled in from semantic data on the product website. Your own proof of purchase is a great candidate for a Solid POD, as is the trace of any repairs made to it.
These technologies are great to cross the barriers between applications. If we’d embrace this, it would be trivial to find the screen protector matching your exact smartphone because we’d have an identifier to discover its type and specifications. Heck, any product search would be easier if you could combine sources and compare with what you already have.
The sharing tech exists. Building apps works also. Interpreting the information without building a dedicated interface seems lacking for laymen.
IPv6. Stop engineering IoT junk on single-stack IPv4, you dipshits.
Ogg Opus. It’s superior to everything in every way. It’s free and there is absolutely no reason to not support it. It blows my mind that MPEG 1.0 Layer III is still so dominant.
Count the number of devices in use today that will never support Opus, and it might not blow your mind any longer. Also, AFAIK, the reference implementation still doesn’t implement full functionality on hardware that lacks a floating point unit.
These things take time.
I remember using Xiph’s integer implementation of Ogg Vorbis on my Nokia N-Gage (Symbian S60). I wonder if it’s not a priority for Opus. IIRC, Opus is floats all the way down.
update: it exists.
wiki.xiph.org/OpusFAQ#Is_there_a_fixed-point_impl…?
I remember trying to understand the Vorbis fixed-point codebase; it was completely bonkers. The three of us on this task couldn’t even draw a rough control flow diagram.
Amen
Out of curiosity, why ogg as opposed to other containers? What advantages does it have?
Definitely agree on the Opus part, but I am very ignorant on the ogg container.
Large ISPs still don’t support it. It’s a fucking travesty.
Love, love, opus. It’s a fantastic format.
I setup my opnsense firewall for IPv6 recently with Spectrum as an ISP. I followed this howto from The Other Site:
reddit.com/…/psa_howto_ipv6_on_spectrum_formerly_…
Even as someone who has a background in networking, I’d have no idea how to figure some of that stuff out on my own (besides reading a whole lot and trying shit that will probably break my network for a weekend). And whatever else you might say about Spectrum, they have one of the saner ways to implement it; no 6to4 or PPPoEv6 or any of that nonsense.
I did set the config for a /54, but Spectrum still gave me a /64. Which you can’t subnet in IPv6. Boo.
Oh, and I’m not 100% sure if the prefix is static or not. There’s no good reason that it should change, except to make self-hosting more difficult, but I have a feeling I’ll see it change at some point.
So basically, if this is confusing and limiting for power users, how are average home users supposed to do it?
There are some standardization things that could make things easier, but ISPs seem to be doing everything they can to make this as painful as possible. Which is to their own detriment. Sticking to IPv4 makes their networks more expensive, less reliable, and slower.
JSON5. It’s basically just JSON with several QoL improvements, like comments, that make it usable as a format for human consumption (as opposed to a serialization format).
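A small taste of the QoL difference, assuming the third-party `json5` package (`pip install json5`):

```python
import json5

config = json5.loads("""
{
    // comments!
    retries: 3,            // unquoted keys are fine
    timeout: .5,           // so are leading-dot floats
    hosts: [
        "a.example.com",
        "b.example.com",   // <- this trailing comma would break plain JSON
    ],
}
""")
print(config["hosts"])
```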
TIL this exists
I just came.
TMI
I love that there’s someone out there who’s that passionate about JSON.
I hate grammars that don’t support trailing commas. It’s even worse when it’s supported in some contexts and not others. Like lists are OK, but not function parameters.
NNTPS
I wish there was a good open standard for task management or todo list.
I know there’s todo.txt, but it lacks features like dependent tasks, and overall the plain text format limits features and implementations.
I think CalDAV (which uses the iCalendar format) may be the closest thing. It covers calendar items, obviously, but also task and journal items.
Do you know if it allows dependent tasks?
Yes, but not all clients expose dependent tasks (which is sadly a common issue with open standards: they aren’t always properly implemented). I’m using Tasks.org on my phone (which supports dependent tasks), synchronizing to a Nextcloud server with the Tasks app (which supports dependent tasks now, but didn’t for a long time), which also syncs to Thunderbird (which does not appear to show dependent tasks as dependents).

Edit: remembered that the Nextcloud Tasks app has long supported dependent tasks. I was thinking of recurring tasks, which it does not support. Again, open standards aren’t always fully implemented.
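For reference, this is what a dependent task actually looks like in iCalendar (RFC 5545), the format CalDAV syncs; whether a client honors RELATED-TO is exactly the implementation lottery described above (the UIDs are made up):

```python
# Two VTODOs; the child declares its parent via RELATED-TO.
VTODO_PAIR = """\
BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//example//tasks//EN
BEGIN:VTODO
UID:parent-task-1
SUMMARY:Plan the trip
END:VTODO
BEGIN:VTODO
UID:child-task-1
RELATED-TO;RELTYPE=PARENT:parent-task-1
SUMMARY:Book the hotel
END:VTODO
END:VCALENDAR
"""
print(VTODO_PAIR)
```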
Well that’s still good news that I didn’t expect! I suppose I will look into that then. Thank you!
Not sure if it counts, but it’s amazing that the terminal world is a place where many applications do so many different things yet remain interoperable. I guess that would be the POSIX standard?
The metric system. F*ck the imperial system; every scientist sticks to the metric system, and why people still use an imperial system, with outdated measurements like stones for weight, blows my mind.

Also f*ck Fahrenheit. We have Celsius and Kalvin for that; we don’t need another hard-to-convert temperature measurement.
I’ll fight you on fahrenheit. It’s very good for weather reporting. 0° being “very cold” and 100° being “very hot” is intuitive.
0 degrees Celsius, the water is freezing, 100 degrees Celsius, the water is boiling. Celsius has a direct link to Kelvin, and Kelvin is the SI unit for measurement temperatures.
What do I care about water? I’m not dressing water for the weather, I’m dressing me.
Are you not made primarily of water?
Do people actually look at the temperature to determine how to dress? For me a calendar is far more useful. And I can stick my head out the door to confirm whether I need an extra layer…
Asterisk: At 1 atmosphere of pressure. Lots of people forget that part.
Knowing whether it may snow or rain depending on whether you are below or above 0 is very useful though. 0 and 100 are only intuitive because you’re used to those numbers. -20 being very cold and 40 being very hot is just as easy.
As someone who’s not used to Fahrenheit I can tell you there’s nothing intuitive about it. How cold is “very cold” exactly? How hot is “very hot” exactly? Without clear references all the numbers in between are meaningless, which is exactly how I perceive any number in Fahrenheit. Intuitive means that without knowing I should have an intuitive perception, but really there’s nothing to go on. I guess from your description 50°F should mean it’s comfortable? Does that mean I can go out in shorts and a t-shirt? It all seems guesswork.
About the only useful thing I see is that 100 Fahrenheit is about body temperature. Yeah, that’s about the only nice thing I can say about Fahrenheit. All temperature scales are arbitrary, but since our environment is full of water, one tied to the phase changes of water around the atmospheric pressure the vast majority of people experience just makes more sense.
But when it comes to weather, the boiling point of water is not a meaningful point of reference.
I suppose I’m biased since I grew up in an area where 0-100°F was roughly the actual temperature range over the course of a year. It was newsworthy when we dropped below zero or rose above 100. It was a scale everybody understood intuitively because it aligned with our lived experience.
And what’s the difference with using -22°C to 40°C? Not a nice ratio? If you grew up with Celsius, you would never feel something is amiss; it would feel just as natural.
Ours is around 10°C to 40°C, or 15°C to 30°C depending upon your tolerances, so I guess that’s it.
Well, the freezing point of water is very relevant for weather. If I see that the forecast is -1 degC when it was positive before, I know I will have to watch out for ice on roads.
And the boiling point as the other reference point makes complete sense.
Ask someone in the north of Finland how hot “very hot” is, and how cold “very cold” is. Then ask the same in the middle of Africa. Spoiler: it will vary a lot.
This is strictly untrue for many climates. Where I live in Canada, 0F is average winter day, 100F is record-breaking “I might actually die” levels of heat.
-30C to 30C is not any more complicated or less intuitive than -22F to 86F
For traffic Celsius is more intuitive since temps approaching zero means slippery roads.
You’re long past that with Fahrenheit. And on a scale from 0 (very cold) to 100 (very hot), 32 doesn’t seem that cold. Until you see the snow outside.
32 isn’t that cold, even if it’s snowing. I do currently live in Minnesota though, so my sense of temperature is much different than someone from somewhere warm.
Minnesotan here. Can confirm that 32 is still long-sleeve shirt weather.
I regularly see people here walking into a store from the parking lot in T-shirts, in 32° weather. Wind chill makes a far greater difference. 38° from wind chill is far colder than 32° with no wind.
That’s probably the reason for this preference.
10°C for me means my PC doesn’t heat up the room enough and I need a heater. 32°F and I will be shoving my feet in the heater.
You are allowed to say fuck here.
Who is Kalvin? Did you mean kelvin?
One drawback of celsius/centigrade is that its degrees are so coarse that weather reports / ambient temperature readings end up either inaccurate or complicated by floating point numbers. I’m on board with using it, but I won’t pretend it’s strictly superior.
A degree Celsius is not coarse and does not require decimals in weather reports, and I suspect only a person who has never lived in a Celsius-using country could make such silly claims.
Consider that even if the difference between 15° and 16°C is not significant to you, it very well might be to other people. (Spoiler: it is.)
Then your suspicions are leading you astray.
They didn’t say that a difference of 1 K isn’t significant, but that a difference of 0.1 K isn’t.
And since the supposed advantage of Fahrenheit is that it better reflects typical ambient temperatures, we have to consider relevance for average people. Hardly anyone will feel a difference of 0.1K.
That’s why European weather reports usually show full degrees. And also our fridges show full degrees.
What about thermostats for homes? I can absolutely feel a 2 deg F difference
Also whole degrees. Edit: no, that’s wrong, there are thermostats that allow tenths of a degree (I only have old manual ones). Still, you probably are not able to tell the difference between 20 and 20.1 °C. Humidity is far more relevant. A difference of 2 °F is 1.1 °C…
I use °C and I feel the need to use the places after the decimal. Also, I feel nothing wrong about it.
Also, I use °F for body temperature measurement and need to use the places after the decimal and feel fine with it.
Also, when using °C for body temperature, I still require the same number of decimal places as I require for °F.
I am not saying that °F is not useful, but I am invalidating your argument.
Imperial is used in thermodynamics industries because the calculations work out better.
Since nobody’s brought it up: MQTT.
It got pigeonholed into the IoT world, but it’s a pretty decent event pub/sub system. It has lots of security/encryption options, plus a websocket layer, so you can use it anywhere from devices, to mobile, to web.

As of late last year, RabbitMQ started supporting it as a server add-on, so it’s easy to use it to create scalable, event-based systems, including for multiuser games.
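A minimal pub/sub sketch with the `paho-mqtt` client package (v1 callback API; the broker address is a placeholder):

```python
import paho.mqtt.client as mqtt

def on_message(client, userdata, msg):
    print(f"{msg.topic}: {msg.payload.decode()}")

client = mqtt.Client()
client.on_message = on_message
client.connect("broker.example.com", 1883)
client.subscribe("sensors/#")                  # wildcard topic subscription
client.publish("sensors/attic/temp", "21.5")   # fire-and-forget publish
client.loop_forever()                          # handle network I/O + callbacks
```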
I spun up a MQTT/Aedes/MongoDB stack on my network recently for some ESP32 sensors.
Fantastic protocol and super easy to work with!
MQTT is great! There are clients available in Python, JS, etc
I’m currently on the ZeroMQ boat. What made you go to RabbitMQ? I need ZeroMQ’s Pair socket for a project.
Installed RabbitMQ for use in Python Celery (for task queue and crontab). Was pleasantly surprised it also offered MQTT support.
Was originally planning on using a third-party, commercial combo websocket/push notification service. But between RabbitMQ/MQTT with websockets and Firebase Cloud Messaging, I’m getting all of it: queuing, MQTT pubsub, and cross-platform push, all for free. 🎉
It all runs nicely in Docker, and when it’s time to deploy and scale, I trust RabbitMQ more since it has solid cluster support.
Problem Details for HTTP APIs (RFC 7807, now RFC 9457) - I have to work with and integrate a lot of different APIs, each with its own kind of error-handling implementation. Everyone seems to be inventing their own flavor of returning errors.

My life would be so much easier if everyone just used some ‘global unified’ way of returning errors, all in the same way.
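For anyone who hasn’t met it: Problem Details is just a standard JSON body plus the `application/problem+json` media type. A sketch with Flask (the field names come from the RFC; the URLs are placeholders):

```python
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/accounts/<account_id>/withdraw", methods=["POST"])
def withdraw(account_id):
    resp = jsonify({
        "type": "https://api.example.com/problems/out-of-credit",
        "title": "You do not have enough credit.",
        "status": 403,
        "detail": "Your balance is 30, but the withdrawal is 50.",
        "instance": f"/accounts/{account_id}/withdraw",
    })
    resp.status_code = 403
    resp.headers["Content-Type"] = "application/problem+json"
    return resp
```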
That would be nice. I have implemented this in the past but never once encountered an API that used it.
Please guys, stop using line-breaks mid-sentence. It’s not the 90’s anymore, viewers generally can wrap.
![screenshot of an IETF RFC with hard line breaks mid-sentence](https://lemmy.ml/pictrs/image/3fcc5d49-9255-4ace-8fa4-127d851458c5.png)
Maybe a bad markdown viewer?
~~viewer~~ generator?

No, in general the markdown format suggests using line breaks in the middle of paragraphs to make the code just as readable as the output. That’s why it takes two line breaks to create a new paragraph. So it’s the viewer showing it incorrectly here.
The screenshot is of the website ietf.org, which doesn’t seem to be markdown.
Best is when the API doesn’t match a PDF and says “500: Internal Error”
I made my first API at work last year (still working on it) and kept looking for input on a consistent way to return errors, with no useful input from the senior programmers or the API users. This is my second biggest problem, the first being variable and function names, of course.
If I were to do anything related to HTTP, I now have something to look at.
XMPP, RSS, …
XMPP is not a good protocol though. There’s a reason nobody uses it anymore.
I think it’s going to be interesting when the EU tries to enforce interoperability between the major messaging platforms. What are they going to do? They have some ridiculous targets like interoperable end-to-end encrypted group video calls in 5 years!
Yeah, Google and Facebook EEE’d it.
Do elaborate.
XMPP is very old and was created when nobody knew about mobile phones. It works more like a true messaging app and less like a message store (unlike Matrix).

The requirement of a permanent TCP/IP connection doesn’t work well for mobile, plus pretty much every useful feature in XMPP (like message history) is optional. If something doesn’t work in XMPP, most people will blame XMPP/Jabber rather than the lack of feature support on their server.
Seriously? That’s your argument? So is the wheel.
I was under the impression PubSub was created for that.
Still, it’s an open extensible protocol.
They elaborated on how that relates; the usage scenario changed with mobile phones. XMPP is a bad match.
The X is for extensible; so are a whole bunch of other protocols, and people haven’t stopped using them, they get improved upon (for the most part).
The mentioned permanent TCP/IP connection (which you don’t necessarily have on mobile) too?
I was under the impression XEP-0060 solves that.
Sorry, I won’t read that whole thing. But I guess you’re right, in which case I take back what I said.
Seriously, if you take just one sentence from the whole response, you end up fighting straw men.

I just told you that Jabber/XMPP was created at a time when almost nobody knew, or believed, that mobile phones could be a thing. Thus it was created that way, with many similarities to e-mail, IRC or ICQ, which didn’t stand the passage of time.

Of course, you’re right that XMPP evolved to get the PubSub extension as an “optional feature”, but because of its availability (or rather the lack of it: most servers didn’t support it even when the client did), XMPP didn’t win the acceptance of end users. It got some attention in the business world (Cisco Jabber) but not in retail.

Business cannot work forever without clients willing to pay, or at least use the product, so it died off even in the business world.

End of story. Try not to fight the straw men you created.
That XMPP’s extensibility is in itself a strength and a weakness is indeed a valid argument, as you’ve exemplified. I was expecting you’d criticize OMEMO though…
No, it didn’t die off; it’s still used. IRC is still used as well, probably more or less at the same level. But if you define usage as “used in business”, well then probably just a few cases, yes.
I hadn’t heard of Cisco Jabber, but I’ve heard of Google and Facebook - both companies’ messengers were initially based on XMPP, but they EEE’d it once they got enough users and walled their gardens, dealing a major blow to the protocol.
Can I fight my inner daemons at least? Please?
Can you please elaborate this point? I don’t understand what you mean by “true messaging app” and why that would be a bad thing?
Are you sure this is the case? Maybe back in the day, but my understanding is this isn’t true anymore
Why is user choice a bad thing? There’s a wealth of clients that implement the features you want
This may not be an important point, but from my experience, people always blame the client and not the underlying protocol. If I face an issue with my browser, I’d likely blame the browser before I blame http.
I use xmpp. It happens to be a great fit for a private family messaging service. Good interoperability between modern clients. I get that “nobody uses it” is hyperbole, but the internet is a big place and there is room for services without mass market appeal to thrive.
I and many others use it! And Google, Meta, etc. have used it but decided to lock it down.
Yes you’re right, there’s a reason people don’t use it as much, which is because these corporations embraced it, dominated it, then extinguished it.
But XMPP is honestly my favorite comm protocol and the most impressive imo.
For RSS I honestly don’t see the point, at least for me. What’s the use of having update feeds in a unified format when I still have to go to each fucking site to view the full text? I completely see the point of RSS when all I need is in the feed. But I hate going from different UI to different UI to get the full content. I want something like inoreader.com for self-hosting.
The content of the feed depends on the content creator, not on RSS.
I know that. But RSS is like 95% used for news feeds, and that’s what I’m talking about. The way RSS is overwhelmingly used makes the whole thing useless (to me).
Well, then just consider those giving shitty support for it as if they weren’t supporting it at all.
RSS works great for me though.
I have an app on my not-so-smart phone to read news when commuting. It is not a long journey, so I just want to have a quick glance at the headlines and read the actual articles that I want to. There are only 6 sites that I am interested in, but it would still take quite some work to crawl the websites directly. RSS in turn is unified, so I don’t need to worry about their website layouts, formats, etc. It also gives me a URL to the actual content, which I can feed to a readability/reader-mode library to parse and further strip unnecessary content.
Quite the opposite, I hope more informational sites offer/keep RSS! (Some removed RSS, typically after a revamp or design change.)
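That commute reader is only a few lines with the `feedparser` package (the feed URLs are placeholders):

```python
import feedparser

FEEDS = [
    "https://news.example.com/rss",
    "https://blog.example.org/feed.xml",
]
for url in FEEDS:
    feed = feedparser.parse(url)       # layout-agnostic: RSS or Atom
    for entry in feed.entries[:5]:
        print(f"{feed.feed.title}: {entry.title}\n  {entry.link}")
```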
Mastodon offers rss for both keywords and users
Miniflux is likely to tick most of your boxes. It’s self hostable and can download the full article without extra clicks / having to visit the source.
Thanks, I’ll take a look. These days Inoreader also shows only the summary, making it useless for me.
This has nothing to do with RSS, it is the author’s choice. It’s like someone who posts links to their articles on Twitter / Facebook / Reddit, same thing. The platform doesn’t prevent you from putting the entire content there, and in fact, many do, especially with RSS.
One benefit of RSS though is that because it is an open protocol, the problem you mention already has solutions, which auto fetch the articles for you. That wouldn’t be possible without an open protocol like RSS
Moreover, I’d argue that even with that, RSS is still a huge plus. To have all your content’s headlines in one UI, where you can filter or sort them however you want, is pretty awesome.
The AV1 video codec!
I’ll add JXL if we’re doing codecs
Depends on where you use it, but tables are often available in markdown.
Fixed… ’cos you could only see the rendered output and not the code.
Oh. Good one. Markdown everywhere. Slack always pissed me off for its subpar markdown support.
There is an option in the settings to use markdown formatting. I haven’t tried it but I guess it at least makes formatting less annoying.
It’s a (small, shitty) subset of markdown. Slack formatting just kind of sucks.
Markdown tables are terrible though. Try and put a code block in there. Adoc tables are amazing on the other hand, but much more verbose to write.
I’d argue this syntax is difficult to read, especially as it scales
The syntax is only difficult to read in their example.
I fixed their example here: programming.dev/comment/12087783
I fixed it for you (markdown tables support padding to make them easy to read):
deleted by creator, who realised their misunderstanding
SQLite for office formats.
I’m not quite following this. Can you please elaborate?
I read this somewhere. Since I couldn’t find it anymore and don’t remember all the advantages aside from concurrency (you don’t have to unpack a zip archive), I asked ChatGPT:

I put related points together. One point was moot, removed.
Unicode editors for notes/todo formats, making markup unnecessary.
Does unicode have bold/italics/underline/headings/tables/…etc.? Isn’t that outside of its intended goal? If not, how is markup unnecessary?
Yes, and even 𝓈𝓉𝓊𝒻𝒻 𝕝𝕚𝕜𝕖 🅣🅷🅘🆂. And table lines & edges & co. are even already in ASCII.
🤷<- this emoji has at least 6 color variants and 3 genders.
Because the editor could place a 𝗯𝗼𝗹𝗱 instead of a **bold**, which is a best-case scenario with markdown support, btw. And I just had to escape the stars, which is a problem that native unicode doesn’t pose.
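A sketch of what such an editor would do under the hood: map ASCII letters onto Unicode’s Mathematical Bold block. It also demonstrates the search problem raised further down in this thread; the styled text no longer matches a plain-text search:

```python
def to_bold(s: str) -> str:
    out = []
    for ch in s:
        if "A" <= ch <= "Z":
            out.append(chr(0x1D400 + ord(ch) - ord("A")))  # MATHEMATICAL BOLD CAPITAL
        elif "a" <= ch <= "z":
            out.append(chr(0x1D41A + ord(ch) - ord("a")))  # MATHEMATICAL BOLD SMALL
        else:
            out.append(ch)
    return "".join(out)

styled = to_bold("bold")
print(styled)              # 𝐛𝐨𝐥𝐝
print("bold" in styled)    # False -- find and grep won't see it either
```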
What about people who prefer to type `**bold**` rather than type a word, highlight it, and find the Bold option in whichever textbox editor they happen to be using?

Which is what I ask for: better (or any) support for unicode character variations, including soft keyboards.
Imagine there was a switch for bold, cursive, etc. on your phone keyboard; why would you want to type markup?
And nobody would take `**bold**` away, if you want to write that.

Would you have to do that for every letter? I suppose a “bold-on/bold-off” character combination would be better/easier, and then you could combine multiple styles without multiplying the number of glyphs by some ridiculous number.
Anyways, because markup is already standardized, mostly. Having both unicode and markup would be a nightmare. More complicated markup (bulleted lists, tables) is simpler than it would be in Unicode. And markup is outside of Unicode’s intended purpose, which is to have a collection of every glyph. Styling is separate from glyphs, and has been for a long time, for good reason. Fonts, bold/italics/underline/strikethrough, color, tables and lists, headings, font size, etc. are simply not something Unicode is designed to handle.
Yeah, had the same thought, edited already.
I don’t like that approach. Text search won’t find all the different possible Unicode representations.
You always find some excuses, huh? That would be a bug.
Always? That’s my first reply. A bug of what? A flaired character has a different code than a standard one, so your files would be incompatible with any established tools like find or grep.
My bad, confused you with someone else.
You can always update them.
About s-expressions, what i read about it: web.archive.org/…/s-expressions-the-fat-free-alte…
Seems rather niche, for non-key-value-pair data structures (aren’t no-sql databases good for that?), considering that lightweight markup fulfills that role for readable document source.
The appeal of json and yaml is readability, and partially ease of parsing. I say s-expressions win over both in both respects.
Can you please expand on your references to no-sql and your reference to “lightweight markup”? I don’t quite understand what you meant there.
S-expressions are basically directly writing the AST a compiler would normally generate. They can be extremely flexible. M-expressions were supposed to be the programming part of Lisp, and S-expressions the data part. Lisp programmers noticed that code is just another kind of data to be manipulated, and then only used S-expressions.
Logo is arguably a Lisp with M-expressions. But whatever niche Logo had is taken by Python now.
I don’t use XMPP but it seems like such a no-brainer
Not Matrix? XMPP is a good idea, but the wildly different levels of support among clients caused problems even back in its heyday. Matrix solves some of that: fully encrypted, chat history stored on the server in encrypted form, and support for gateways to other services.
Honestly I just haven’t looked at Matrix yet. Unfortunately like many of the privacy-centric protocols it’s mostly used by people trying to hide something.
I don’t know about “mostly”, but check out the channels on the kde.org server, where they hold discussions on visual design, development, documentation, and all that good stuff.
Sometimes, if you mostly find what you don’t like, you might be looking at it from the wrong angle. For instance, I found a few very desirable communities on Reddit, so much so that I am finding it hard to leave. And those are the few that I searched for. I only realised how toxic other communities were when I read others’ rants about them ^[and from the recommendations. Definitely don’t check out the Reddit-recommended communities or you will get said toxic stuff.].
Thanks!
This is really not accurate. Matrix is not designed to be a super privacy-first protocol. It’s like Lemmy in that it’s designed to solve a problem and be a useful federated collaboration tool. It borrows features from a number of popular messaging platforms. Message history is stored on the server but encrypted client-side, so privacy is preserved. It supports group chat rooms. It supports voice and video. And most importantly, it supports bridges: you can connect your Matrix to other services that are completely incompatible with Matrix using a bridge. Perhaps the best example of this is Beeper, which is built on Matrix. They are trying to replicate the user experience of the old app Trillian: Beeper can link with a number of chat services including Google Messages, Slack, WhatsApp, Telegram, Signal, etc. Thus you get all your chats in one place.
I feel like I would enjoy Beeper but I just cannot get past the name
What’s wrong with Beeper?
Nothing objectively, it just sounds so stupid to me that I have an irrational aversion lmao
(This is not an insult, I just had a realization that I think might affect you)-- do you know what the name comes from?
Years ago there was a thing called a beeper before everyone had cell phones. It was a one way paging system-- you’d give your friends your beeper number, they’d call it, type in their phone number, and their number (or whatever they dialed in) would appear on your beeper. You’d then use a landline phone to call them back (early versions of the system had no text or reply capability, only numbers and only one-way).
I always thought it was a cool name. But thinking about it I realize someone less than maybe 25-30 years old might literally have never encountered such a device. Much like a 5.25" floppy disk or rotary dial phone, they went out of style years ago and a young person might never have encountered one.
Curious if that’s you?
You know, I probably should have looked into this… yeah, this is me lol. I’ve seen floppies and we had an old rotary phone, but I’ve never heard of a beeper. It still sounds weird but at least there’s a reason.
It’s all good. Like I said, no insult at all. There’s no reason why you would ever have encountered a beeper, it’s one of those things that once SMS came around everybody just collectively decided to move on from. Unlike floppies or rotary phones there wasn’t some continued use for it.
Those problems you speak of about XMPP are not really a concern anymore and haven’t been for a while.
Matrix, on the other hand, is very difficult to implement, and currently there are only one (maybe two?) viable implementations. It is way overcomplicated, resource-intensive, and has privacy issues.
Does it have privacy issues compared to XMPP which doesn’t enforce the privacy extensions? I figure they are about the same there. Asking genuinely as I do not know other than Matrix might leak some metadata.
And quite frankly, I really wish we’d just agree on one or the other. Would love to host an instance and move some people to it but both are just stuck in this quasi-half used/half not state. And even people on here can’t agree what should be “standard.”
Xmpp definitely wins in privacy. What is there to privacy more than message content and metadata? Matrix definitely fails the second one, and is E2E still an issue for public groups? I don’t remember if they fixed that.
XMPP being a protocol built for extensibility means it will be hard for it not to keep up with the times.
On your point of picking one or the other, I’d say pick the one you like, and bridges will help you connect to the other. But XMPP came way before Matrix, and I believe they fractured the community instead of building it.
There’s a good reason all the big tech companies built on top of XMPP (Meta, Google, etc.). It’s a very good protocol and satisfies modern demands very well.
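As a concrete illustration of that extensibility (the JIDs here are made up): any payload under its own XML namespace can ride along in a stanza, and clients that don’t understand it simply ignore it. That’s how XEPs like message receipts (shown below) and OMEMO were layered on.

```xml
<message to="alice@example.org" from="bob@example.org/laptop" type="chat">
  <body>Hello!</body>
  <!-- extension payload: a delivery-receipt request per XEP-0184;
       unaware clients skip the unknown namespace without erroring -->
  <request xmlns="urn:xmpp:receipts"/>
</message>
```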
Okay, so how does modern XMPP protect this? When I last used XMPP, some (not all) clients supported OTR, a protocol for end-to-end encryption. And there wasn’t a function for server-stored chat history (either encrypted or plaintext).
Have these issues been fixed?
It’s not perfect yet, but it’s much, much better than the old days.
OMEMO is supported by every major client, and they interoperate successfully. Unfortunately, most clients are stuck with an older version of the OMEMO spec. It’s not ideal, but it doesn’t cause any practical issue, unless you use Kaidan or UWPX, which only support the latest version.
All popular clients and servers support retrieving chat history now too.
In practice, I’ve been using it for several months to chat with friends and family, and haven’t had any issues.
Definitely not Matrix.
I’ll give my usual contribution to RSS feed discourse, which is that, news flash! RSS feeds support video!
It drives me crazy when podcasters are like, “thanks for listening to our audio podcasts. We also have a video feed for our YouTube subscribers.” Just let me have the video in PocketCasts please!
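For reference, video in a feed is just the ordinary enclosure element with a video MIME type; the URL and numbers here are made up:

```xml
<item>
  <title>Episode 42</title>
  <!-- enclosure is how podcast feeds attach media; any MIME type works -->
  <enclosure url="https://example.org/ep42.mp4"
             type="video/mp4"
             length="123456789"/>
</item>
```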
I feel you, but I don’t think podcasters point to YouTube for video feeds because of a supposed limitation of RSS. They do it because of the storage and bandwidth costs of hosting video.
I’d think they’d get it back by not having to share their ad rev with Google. There’s something to be said for the economies of scale Google benefits from but with cloud services that’s not as relevant as it was.
I just wrote a YouTube scraper that exports to RSS and feeds into my podcast client. Using YouTube any other way is masochism in comparison.
Zigbee or really any Bluetooth alternative.
Bluetooth is a poorly engineered protocol. It jumps around the spectrum while transmitting, which makes it difficult and power-intensive for Bluetooth receivers to track.
I agree Bluetooth (at least Bluetooth Classic) is not very well designed, but not because of frequency hopping. That improves robustness and I don’t see why it would cost any more power. The hopping pattern is deterministic. Receivers know in advance which frequency to hop to.
This isn’t exactly what you asked, but our URI/URL schema is basically a bunch of missed opportunities, and I wish it was better designed.
Ok so it starts off with the scheme name, which makes sense. http: or ftp: or even tel:
But then it goes into the domain name system, which suffers from the problem that the root, then the top-level domain, then the domain, then progressively smaller subdomains go right to left. www.example.com requires the system to look up the root zone to see who manages the .com TLD, then who owns example.com, then a lookup of the www subdomain. Then, if a port number needs to be specified, it goes after the domain name, right next to the implied root domain. Then the rest of the URL, by default, goes left to right in decreasing order of significance. It’s just a weird mismatch, and it would make a ton more sense if it were all left to right, including the domain name.
Then don’t get me started on how the www subdomain itself no longer makes sense. I get that the system was designed long before HTTP and the WWW took over the internet as basically the default, but if we had known that in advance, it would’ve made sense not to push www in front of all website domains throughout the ’90s and early 2000s.
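To illustrate the mismatch with a purely hypothetical rewrite (not a real proposal, just for comparison):

```
today:           https://www.example.com:8080/docs/guide/intro.html
                 (host reads least to most significant, path most to least)

consistent LTR:  https://com.example.www:8080/docs/guide/intro.html
```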
This is actually exactly what I asked for, thank you!!
I have never understood why you can delegate a subdomain but not the root domain. I doubt it was a technical issue, because they added support for it recently via SVCB records. (But maybe technical concerns were actually fixed in the decades since.)

Don’t worry, in 5 or 10 years Google will develop an alternative and the rest of FAANG will back it. It will be super technically correct but will include a cryptographic signature that only big tech companies can issue.
Org-mode is like Markdown but has tables and more. Emacs will even run computations as part of interpreting the file. GitHub accepts it in place of Markdown.
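A small taste of the syntax (contents made up): headings double as TODO items, and Emacs re-aligns the table as you type.

```org
* TODO Write the report
  SCHEDULED: <2025-01-06 Mon>

| Name  | Hours |
|-------+-------|
| Alice |     4 |
| Bob   |     6 |
```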
Would you say it’s worth considering in place of markdown for a non-emacs user? (I am curious to try emacs but I may not get to it anytime soon)
I can’t say that it is, no.
Haha appreciate the honesty :)
I do recommend Emacs though. It is not the greatest editor, but it is an amazing experience. It is such an amazing experiment, with an extensive set of different ways of looking at content and code, that it will change how you think about coding.
org-mode is awesome for many reasons, but the similarities/overlap with markdown are an incidental benefit. I wouldn’t learn org-mode for that reason, however there are many other good ones that make it worthwhile. I’ve been using it for years for my own project management, tasks tracking, notes and many other things - it’s one of those rare tools that can do many things incredibly well.
IRC.
Jabber.
IPFS.
I also pick this guy’s IRC
Yes and RSS feeds.
gRPC for building APIs instead of REST. Type safety makes life easier.
It’s the recommended approach to replace WCF, which was deprecated after .NET Framework 4.8. My company is just now getting around to ripping out all their WCF stuff and putting in gRPC. REST interfaces were always a non-starter because of how “heavyweight” they were for our use case (data collection from industrial devices which are themselves data collectors).
I mean, REST-ful JSON APIs can be perfectly type-safe, if their developers actually take care to make them that way. And the self-descriptive nature of JSON is arguably a benefit in really large public-facing APIs. But yeah, gRPC forces a certain amount of type-safety and version control, and gRPC with protobuf is SUCH a pleasure to work with.
Give it time, though, it’s definitely gaining traction.
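For anyone who hasn’t seen it, here’s a minimal protobuf sketch (service and message names invented for illustration). Client and server stubs are both generated from this file, so a type mismatch fails at build time instead of at runtime:

```proto
syntax = "proto3";

// hypothetical service for the industrial-data-collection case above
service DeviceCollector {
  rpc PushReading (Reading) returns (Ack);
}

message Reading {
  string device_id = 1;
  double value     = 2;
  int64  unix_ms   = 3;  // sample timestamp, milliseconds since epoch
}

message Ack {
  bool ok = 1;
}
```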
The biggest problems with gRPC are:
Plain HTTP can be type-safe. Just publish JSON Schema or TypeSpec files, or even use Protobuf.
Your concerns are all valid, but for 1 and 3 there are possible solutions. I’m using Rust + Tonic to build an API, and that eliminates the need for proxies, and it’s very simple to use.
I know that it doesn’t solve all problems, but IMHO it’s a question of adoption: easier tools will be developed for it.

Am I the only one who is weirded out? Requiring a web server for something, and then requiring another server if you want it to actually work on the web?
How expensive do people want to make their deployments?
I like the concept, and I think the use case is almost covered by generating an API client from a generated OpenAPI spec. It needs a bit of setup, but a client library can be built whenever a backend server is built.
(Holocene or) Human Era calendar
That would represent all of human history on one continuous scale; converting is as simple as adding 10,000 to the year, so 2024 CE becomes 12024 HE.
That sounds interesting. It would most likely not be very popular with lots of people, and a pain in the butt to implement, but interesting.
There’s a cool video from In a Nutshell about it some years ago.
And also, the Dekatrian calendar
Where we would have a less broken, more regular year calendar that is almost aligned with the moon cycle.
<img alt="2017 example" src="https://lemmy.zip/pictrs/image/0a182b6c-8a75-4ccd-ad1f-5a7d4048a34e.webp">
Oh, many years ago in school I created something like that for an arts/creative writing project: a calendar with twelve 30-day months, based on Sailor Moon. Having it based on a magical-girl manga gave me the freedom to declare the rest of the days “days of evil”. It was a fun project, because I created a whole religion around it. 😁
TOML instead of YAML or JSON for configuration.
YAML is complex and has security concerns most people are not aware of.
JSON works, but the braces, quoting, and indenting are a lot of noise for a simple key-value format.
TOML is not a very good format IMO. It’s fine for very simple config structures, but as soon as you have any level of nesting at all it becomes an unobvious mess. Worse than YAML even.
What is this even?
That’s an example from the docs, and I have literally no idea what structure it makes. Compare to the JSON which is far more obvious:
The fact that they have to explain the structure by showing you the corresponding JSON says a lot.
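For context, the example in question is presumably something like the array-of-tables syntax from the TOML spec (reconstructed here from the spec, since the original snippet didn’t survive the thread):

```toml
[[fruits]]
name = "apple"

[[fruits.varieties]]
name = "red delicious"

[[fruits.varieties]]
name = "granny smith"
```

which corresponds to this JSON:

```json
{
  "fruits": [
    {
      "name": "apple",
      "varieties": [
        { "name": "red delicious" },
        { "name": "granny smith" }
      ]
    }
  ]
}
```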
JSON5 is much better IMO. Unfortunately it isn’t as popular and doesn’t have as much ecosystem support.
You’re using a purposely convoluted example from the spec. And I think it shows exactly how TOML is better than JSON for creating config files.
The TOML file is a lot easier to scan than the hopelessly messy JSON file. The mix of indentation and symbols used in JSON really does not do well in bigger configuration files.
Nice. I mostly use Qt’s JSON classes, and upon reading the spec I see at least a few things I would want out of this, even when using it for machine-to-machine communication.
YAML is racist to Norwegians.
If you have something like `country: NO` (NO = Norway), YAML will turn that into `country: False`. Why? Implicit casting. There are a bunch of truthy strings that’ll be cast automagically.

That’s “country-ist”. Nothing to do with the genes of people living over there.
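You can reproduce this in two lines with PyYAML, which implements YAML 1.1 (where unquoted NO, yes, off, and friends are booleans). Quoting the value avoids the cast:

```python
import yaml  # PyYAML, a YAML 1.1 implementation

print(yaml.safe_load("country: NO"))    # {'country': False}
print(yaml.safe_load("country: 'NO'"))  # {'country': 'NO'}
```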
True, but that sounds boring.
What in tarnation
People bitch about YAML but for me it’s still the preferred one just because the others suck more.
TOML, like I said, is fine for simple things, but as soon as you get a bit more complex, it’s messy and unwieldy. And JSON is fine to operate on, but for a config? It’s a mess; it’s harder to type and read for something like a config file.

Heck, I’m not even sold on S-expressions compared to YAML yet. But then, I deal with all of these formats so much that I simply still prefer YAML for readability and ease of use (compared to the others).
I wish standards were always open access. Not behind a 600 dollar paywall.
When it is paywalled I’m irritated it’s even called a standard.
DP >> HDMI
I’d like something akin to XML DOM for config files, but not XML.
The one benefit of binary config (like the Windows Registry) is that you can make a change programmatically without too many hoops. With text files, you have a couple of choices for programmatic changes:
That last one probably exists for very specific formats in very specific languages, but it’s not common. It’s a little more cumbersome to use as a programmer (anyone who has worked with XML DOM will attest to that), but it’s a lot nicer for end users.
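For what it’s worth, that kind of API does exist in a few places. A sketch using Python’s third-party tomlkit library (the config content is made up), which keeps comments and layout intact across programmatic edits:

```python
import tomlkit  # third-party; round-trips comments and whitespace

doc = tomlkit.parse("""\
# connection settings
[server]
host = "localhost"  # change me in production
port = 8080
""")

doc["server"]["port"] = 9090  # programmatic change, registry-style
print(tomlkit.dumps(doc))     # comments and formatting survive
```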
Have you heard about KDL?
djot for text markup. It addresses a lot of the issues in CommonMark (and of course far more of the issues of Markdown).
I like Doxygen’s implementation and extension of Markdown. Pair it with PlantUML and you have something worth being a standard.
.
Very much the same. I was terrified of regex, now I love it
What resource did you use to master it? As every time I have to use regex I want to cry.
.