[-] [email protected] 6 points 5 days ago

Bloating HTTP and its implementations for REST-specific use-cases

I have no idea what you're talking about. Setting a request/response header is not bloating HTTP. That's like claiming that setting a field in a response body is bloating JSON.

[-] [email protected] 3 points 5 days ago* (last edited 5 days ago)

Also, TIL that the IETF deprecated the X- prefix more than 10 years ago. Seems like that one didn’t pan out.

Can you elaborate on that? The X- prefix is supposedly only a recommendation, intended to be used for non-standard, custom, ad-hoc request headers to avoid naming conflicts.

Taken from https://datatracker.ietf.org/doc/html/rfc6648

In short, although in theory the "X-" convention was a good way to avoid collisions (and attendant interoperability problems) between standardized parameters and unstandardized parameters, in practice the benefits have been outweighed by the costs associated with the leakage of unstandardized parameters into the standards space.

I still work on software that extensively uses X- headers.
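
For a concrete example of the leakage the RFC describes, consider X-Forwarded-For: it started as an ad-hoc X- header, became a de-facto standard, and was eventually standardized under a different name, so now both have to be supported indefinitely (the header names below are real; the address is just a documentation IP):

    X-Forwarded-For: 203.0.113.7      (the ad-hoc X- header that leaked into the standards space)
    Forwarded: for=203.0.113.7        (its standardized successor, per RFC 7239)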

7
Well, it's just an AWS Account ID! (mail.cloudsecurity.club)
submitted 6 days ago by [email protected] to c/[email protected]
[-] [email protected] 1 points 6 days ago

I don’t see why using submodules as a package manager should excuse their endless bugs.

I don't know what "endless bugs" you're talking about. Submodules might have a UX that's rough around the edges, but there are really no moving parts in them: they basically amount to cloning a repo and checking out a specific commit.
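
To make that concrete, here's roughly all a submodule does mechanically (the URL, path, and tag are made up for the example):

    git submodule add https://example.com/libfoo.git vendor/libfoo   # records the URL in .gitmodules, pins the current commit
    git submodule update --init                                      # after a fresh clone: clone + checkout of the pinned commit
    git -C vendor/libfoo checkout v1.2.0                             # moving the pin is just a checkout...
    git add vendor/libfoo && git commit -m "bump libfoo"             # ...plus committing the new pointer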

Do you actually have any specific, tangible issue with submodules, even in the cases where you're clearly and grossly misusing them?

[-] [email protected] 6 points 1 week ago

It's interesting that the internet is packed with search hits complaining that Cloudflare's DNS is slowing everything down, yet Cloudflare representatives are quick to post follow-ups pointing the finger everywhere else.

7
std::try_cast and (const&&)=delete (quuxplusone.github.io)
submitted 1 week ago by [email protected] to c/[email protected]
[-] [email protected] 5 points 1 week ago

Asking this question is like asking when was the last time you had to search through text.

[-] [email protected] 2 points 1 week ago* (last edited 1 week ago)

Aside from the obvious UX disaster, Git has some big issues:

I find this blend of claims amusing. I've been using Git for years on end, with Git LFS and rebase-heavy user flows, and for some odd reason I never managed to stumble upon these so-called "disasters". Odd.

What I do stumble upon are mild annoyances, such as having to deal with conflicts when reordering commits, or the occasional submodule hiccup because it was misused as a replacement for a package manager when it really shouldn't have been, but I would not call any of these "disasters". The only gripe I have with Git is the lack of a command to split a past commit into two consecutive commits (the reverse of a squash), especially when I accidentally bundled changes to multiple files that shouldn't have been bundled. It's nothing an interactive rebase doesn't solve, but it's multiple steps that could be one.
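
For reference, the multi-step dance I mean looks like this (abc123 and the file names are stand-ins for the commit being split and its contents):

    git rebase -i abc123^             # mark abc123 as "edit" in the todo list
    git reset HEAD^                   # undo the commit, keeping its changes in the worktree
    git add first.cpp
    git commit -m "first half"
    git add second.cpp
    git commit -m "second half"
    git rebase --continue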

Can you point out the most disastrous disaster you can possibly conceive of with Git? Just to have a clear idea of where that hyperbole lies.

[-] [email protected] 8 points 2 weeks ago

There are no hard and fast rules, and it depends on what uses you have for the build number.

Making it a monotonically increasing number helps with versioning because it's trivial to figure out which version is newer. Nevertheless, you can also rely on semantic versioning for that. It's not like all projects are like Windows 10, where half a dozen major versions were pinned at 10.0.
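
If you ever need to order version strings by hand, GNU sort already knows how; a throwaway example:

    printf '1.9.2\n1.10.0\n1.2.13\n' | sort -V    # version sort: prints 1.2.13, then 1.9.2, then 1.10.0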

You sound like you're focusing on the wrong problem. You first need to figure out what your versioning strategy is, and from there figure out whether a build number plays any role in it.

[-] [email protected] 2 points 1 month ago

Remembering ActiveX Controls, the Web’s Biggest Mistake:

Running JavaScript everywhere is looming as one of the biggest screwups in InfoSec. What do userscript extensions like Greasemonkey teach us?

[-] [email protected] 3 points 1 month ago* (last edited 1 month ago)

Ah, the Microsoft tradition of always having the wrong priorities.

I wouldn't be too hard on Microsoft. The requirement to curate public package repositories only emerged somewhat recently, as demonstrated by the likes of npm, and putting a process in place to audit and pull offending packages might not be straightforward.

I think the main takeaway is the lesson that it is not safe to install random software you come across online. Is this lesson new, though?

[-] [email protected] 1 points 1 month ago

Agile is not a system. It’s a set of principles, set by the Agile manifesto.

The Agile manifesto boils down to a set of priorities that aren’t even set as absolutes.

I strongly recommend you read up on Agile before blaming things you don't like on things you don't understand.

[-] [email protected] 1 points 1 month ago

ccache folder size started becoming huge. And it just didn’t speed up the project builds, I don’t remember the details of why.

That's highly unusual, and suggests your project was misconfigured in a way that prevented builds from actually being cached: the cache just accumulated compiled objects it could never reuse.
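
If anyone wants to check whether they're in that situation, ccache's own statistics make it obvious; a minimal check:

    ccache -z     # zero the statistics
    make -j24     # run the build you want to measure
    ccache -s     # a healthy setup shows mostly cache hits; near-100% misses means misconfiguration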

When I tried it I was working on a 100+ devs C++ project, 3/4M LOC, about as big as they come.

That's not necessarily a problem. I worked on C++ projects of a similar size and ccache just worked. It has more to do with how your project is set up, and with misconfigurations.

Compilation of everything from scratch was an hour at the end.

That fits my use case as well. End-to-end builds took slightly longer than 1h, but after onboarding ccache the same end-to-end builds would take less than 2 minutes. Incremental builds were virtually instant.

Switching to lld was a huge win, as well as going from 12 to 24 compilation threads.

That's perfectly fine. Ccache acts before linking, and naturally being able to run more parallel tasks can indeed help, regardless of ccache being in place.

Surprisingly, ccache works even better in this scenario. With ccache, the bottleneck of any build task switches from the CPU/Memory to IO. This had the nice trait that it was now possible to overcommit the number of jobs as the processor was no longer being maxed out. In my case it was possible to run around 40% more build jobs than physical threads to get a CPU utilization rate above 80%.
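
As a sketch, on a 24-thread machine that ~40% overcommit would look like this (the exact factor is workload-dependent):

    make -j34    # ~24 threads x 1.4: with cache hits being IO-bound, the extra jobs soak up the idle CPU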

I was a linux dev there, the pch’s worked, (...)

I dare say ccache was not caching what it could due to precompiled headers. If you really want those, you need to configure ccache to tolerate them. Nevertheless it's a tad pointless to have pch in a project for performance reasons when you can have a proper compiler cache.
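
For completeness, this is roughly what configuring ccache to tolerate precompiled headers means, at least for GCC (check the ccache manual for your compiler; clang needs a slightly different incantation):

    # in ccache.conf, or via CCACHE_SLOPPINESS in the environment
    sloppiness = pch_defines,time_macros
    # and add -fpch-preprocess to the GCC compile flags so ccache can see through the PCH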

[-] [email protected] 4 points 1 month ago

Also interesting: successful software projects don't just finish and die. They keep going, adapt to changes, and implement new features. If we have a successful project that goes on for a decade alongside a clusterfuck of a project which blows up each year over the same time period, by this metric we'd have only about a 10% success rate: one long-lived success counted once against ten annual failures.
