thesmokingman

joined 1 year ago
[–] [email protected] 3 points 1 hour ago (1 children)

Yeah! At scale that really falls apart. I have lots of conversations with lots of people across time zones, so waiting for the intersection of everyone’s availability actively blocks work.

Asynchronous communication is exactly that. If you are not listening when your manager says “don’t Slack after work” that’s on you. I sure fucking don’t and I make that very clear.

[–] [email protected] 12 points 5 hours ago (3 children)

I manage a workforce across time zones and, as someone with ADHD, it’s usually best for me to fire off messages as things arise. If I read the summary correctly, I’m not allowed to Slack/email after hours, which would create a huge burden for a remote workforce. I think that summary is incorrect; it’s more that I can’t force people to respond to, or even read, those messages outside their work hours. I completely support this, and I regularly bother my team when they respond to stuff after their day has ended. I call this out every quarter as we update our team working agreement. I don’t have any notifications set up for work comms, period, and have made it very clear the only way to get in touch with me is a phone call.

[–] [email protected] 1 points 19 hours ago (1 children)

That’s not how that works.

[–] [email protected] 1 points 22 hours ago (3 children)

There is literally no way to opt out of Google’s data collection if you are going to use their products. Using another frontend shifts the data profile but it still exists and provides value to them. It’s reasonable to say it’s a bad thing. It’s unreasonable to say there are no other ways. I grew up in a public library and I can still get most of the information I need from a public library without Google products (things I can’t get usually come through inter-library loan or direct connections with subject matter experts at, say, a maker space). This seems to be less of “I’m against invasive corporations” and more of a “I don’t like the solutions available to avoid invasive corporations.”

[–] [email protected] -3 points 1 day ago (5 children)

If you care about that, you don’t use YouTube at all, nor support creators that do. Even using 3rd party apps or services feeds into that. This feels like a serious non sequitur on any thread about any Google product.

[–] [email protected] 8 points 1 day ago (7 children)

I pay for YouTube Family. I consume a lot of YouTube and I want to support the creators I watch. At its current price, YouTube Family is reasonable: several households in my family get ad-free YouTube for a fairly low cost per household.

If the price goes up much (eg if I were paying the single-plan price of $11 per household), if the creators I really enjoy keep getting pushed out or changing content because of shitty ad rules, or if they pull the whole “must be in the same household” bullshit, I would drop it in a heartbeat just like I’ve dropped most streaming providers. Streaming has become cable, and YouTube has been shooting itself in the foot by forcibly changing content for advertisers. I come to the platform for content, not advertisers.

[–] [email protected] 2 points 2 days ago

I think it’s because Taskmaster has its own streaming whatever there. No one in the US carries it so I watch it all through the official YouTube channel.

Granted this is a clip show so I’m not sure why it’s restricted period.

[–] [email protected] 80 points 4 days ago (6 children)

Other answers have only called out rotating the secret which is how you fix this specific failure. After you’ve rotated, delete the key from the repo because secrets don’t belong in repos. Next look at something like git-secrets or gitleaks to use as a local pre-commit hook to help prevent future failures. You’re human and you’re going to make mistakes; plan for them.
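For example, a minimal local pre-commit hook wiring in gitleaks might look something like this (a sketch, assuming gitleaks is installed and on your PATH; exact flags vary a bit by gitleaks version):

```shell
# Install a local pre-commit hook that scans only staged changes.
cat > .git/hooks/pre-commit <<'EOF'
#!/bin/sh
# Block the commit if gitleaks finds anything that looks like a secret.
gitleaks protect --staged || {
  echo "gitleaks flagged a potential secret; commit aborted." >&2
  exit 1
}
EOF
chmod +x .git/hooks/pre-commit
```

Tools like pre-commit (the framework) can manage the same hook across a team instead of hand-writing it per clone.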

Another good habit to be in is to only access secrets from environment variables. I personally use direnv, whose .envrc configuration file is globally ignored via git’s core.excludesFile.
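A minimal sketch of that setup (the secret name and file paths here are placeholders, not anything specific):

```shell
# .envrc lives in the project directory and is never committed:
#   export SOME_API_KEY="..."   # hypothetical secret, loaded by direnv per-directory

# Ignore .envrc in every repo on this machine via a global excludes file:
git config --global core.excludesFile ~/.gitignore_global
echo ".envrc" >> ~/.gitignore_global

# Tell direnv this directory's .envrc is trusted:
direnv allow .
```

Code then reads the secret from the environment (e.g. `$SOME_API_KEY`) instead of from a file in the repo.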

You can add other strategies for good defense-in-depth, such as a server-side pre-receive hook that checks for secrets, ensuring no one can push them (eg in case someone never installed the local hooks).
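A server-side sketch of that idea, assuming gitleaks is available on the git host (flag names vary by version; this is illustrative, not a drop-in hook):

```shell
#!/bin/sh
# pre-receive hook: runs on the server for every push.
# Rejects pushes whose new commits contain detectable secrets,
# even when the pusher never installed any local hooks.
while read oldrev newrev refname; do
  # Scan only the range of commits being pushed.
  if ! gitleaks detect --log-opts="$oldrev..$newrev"; then
    echo "Push rejected: potential secret in $refname" >&2
    exit 1
  fi
done
```

Hosted platforms offer the same idea as a managed feature (e.g. push protection on GitHub) if you can’t run your own hooks.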

[–] [email protected] 9 points 1 week ago* (last edited 1 week ago)

Thanks for citing for me. This is exactly what I was referring to!

[–] [email protected] 125 points 1 week ago (21 children)

Teens are constantly sleepy because that’s how teens work. School start times especially make it impossible for them to get proper sleep. I’d say it’s ridiculous that someone who has authority over teens doesn’t understand the fucking basics of teens, but it’s the US criminal justice system, where authority is made up and the credentials don’t matter.

[–] [email protected] 18 points 1 week ago

To be clear, usually there’s an approval gate. Something is generated automatically but a product or business person has to actually approve the alert going out. Behind the scenes everyone internal knows shit is on fire (unless they have shitty monitoring, metrics, and alerting which is true for a lot of places but not major cloud or SaaS providers).

[–] [email protected] 7 points 1 week ago

Speaking from 10+ YoE developing metrics, dashboards, uptime, all that shit, and another 5+ on top of that at an exec level managing all that, this is bullshit. There is a disconnect between the automated systems that tell us something is down and the people that want to tell the outside world something is down. If you are a small company, there’s a decent chance you’ve launched your product without proper alerting and monitoring, so you have to manually manage outages. If you are GitHub or AWS size, you know exactly when shit hits the fan because you have contracts that depend on that and you’re going to need some justification for downtime. Assuming a healthy environment, you’re doing a blameless postmortem, but you’ve done millions of those at that scale and part of resolving them is ensuring you know before it happens again. Internally you know when there is an outage; exposing that externally is always about making yourself look good, not customer experience.

What you’re describing is the incident management process. That also doesn’t require management input because you’re not going to wait for some fucking suit to respond to a Slack message. Your alarms have severities that give you agency. Again, small businesses sure you might not, but at large scale, especially with anyone holding anything like a SOC2, you have procedures in place and you’re stopping the bleeding. You will have some level of leadership that steps in and translates what the individual contributors are doing to business speak; that doesn’t prevent you from telling your customers shit is fucked up.
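The “alarms have severities that give you agency” point boils down to a routing table: the severity itself dictates the response, with no suit in the loop. A trivial sketch (the severity names and actions here are hypothetical, not any particular company’s runbook):

```shell
#!/bin/sh
# Hypothetical severity routing: the on-call response is driven by the
# alarm's severity, not by waiting for management sign-off.
route_alert() {
  case "$1" in
    sev1) echo "page on-call now, open incident bridge, update public status page" ;;
    sev2) echo "page on-call now, internal incident channel only" ;;
    sev3) echo "file a ticket for the next business day" ;;
    *)    echo "unknown severity: log and triage manually" ;;
  esac
}

route_alert sev1
```

Leadership can still translate the response into business speak afterwards; nothing in that flow blocks telling customers things are broken.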

The only time a company actually needs to properly evaluate what’s going on before announcing is a security incident. There’s a huge difference between “my honeypot blew up” and “the database in this region is fucked so customers can’t write anything to it; they probably can’t use our product.” My honeypot blowing up might be an indication I’m fucked, or just that the attackers blew up the honeypot instead of anything else. Can’t send traffic to a region? Literally no reason the customer would be able to either, so why am I not telling them?

I read your response as either someone who knows nothing about the field or someone on the business side who doesn’t actually understand how single panes of glass work. If that’s not the case, I apologize. This is a huge pet peeve for basically anyone in the SRE/DevOps space who consumes these shitty status pages.
