Software Engineering

304 readers

Software Engineering is the systematic, disciplined development of software across its entire life cycle.


Rules

  1. Keep related to software engineering
  2. Keep comments on-topic of the post
  3. Try to post free/open access content
  4. Try to post content from reliable sources (ACM, IEEE, SEI, NN/G, ...), or useful content in general
  5. Relevant questions are welcome, as long as they are genuine and respectful
  6. Be genuinely respectful, kind, helpful; act in and assume good faith
  7. No discrimination
  8. No personal attacks, no personal questions
  9. No attention stealing: no ads, spam, influencer self-promotion, memes, trolling, emotional manipulation/advertising (e.g. engagement through enragement or other negative emotions), jokes that dissipate the focus of the topic, ...

Resources

founded 2 years ago
MODERATORS
26
 
 
  1. Affectiva's Emotion AI ... facial analysis and emotion recognition to understand user emotional responses
  2. Ceros' Gemma ... generate new ideas, optimize existing designs, ... learn from your ideas and creative inputs, providing designers with personalized suggestions
  3. A/B Tasty ... UX designers to run A/B tests and optimize user experiences
  4. Slickplan ... sitemap generator and information architecture tool
  5. SketchAR ... creating accurate sketches and illustrations
  6. Xtensio ... user personas, journey maps, and other UX design deliverables
  7. Voiceflow ... create voice-based applications and conversational experiences

PS: Sounds like an ad, but still interesting to see tools in the wild that support AI for these conceptual phases in software engineering

27
 
 

In today’s hybrid work landscape, meetings have become abundant, but unfortunately, many of them still suffer from inefficiency and ineffectiveness. Specifically, meetings aimed at generating ideas to address various challenges related to people, processes, or products encounter recurring issues. The lack of a clear goal in these meetings hinders active participation, and the organizer often dominates the conversation, resulting in a limited number of ideas that fail to fully solve the problem. Both the organizer and attendees are left feeling dissatisfied with the outcomes.

...

The sample agenda below assumes that the problem definition is clear. If that is not the case, hold a session prior to the ideation session to align on the problem. Tools such as interviewing, Affinity Mapping, and developing User Need statements and "How Might We" questions can be useful in facilitating that discussion.

A sample agenda for ideation sessions

Estimated time needed: 45–60 minutes

  1. Introduction & ground rules (2 minutes)
  • Share the agenda for the ideation session.
  • Review any ground rules or guidelines for the meeting.
  • Allow time for attendees to ask questions or seek clarification.
  2. Warm-up exercise (5–10 minutes)
  • Conduct a warm-up exercise, such as 30 Circles or One Thing, Nine Ways, to foster creativity and build rapport among participants.
  • Choose an activity that aligns with the goals of the meeting and reflects the activities planned for the session. A quick Google search for “warm-up exercises design thinking” will return several potential activities.
  3. Frame the problem (2 minutes)
  • Share a single artifact (slide, Word doc, section of text in a whiteboard tool) that summarizes the problem, giving participants a reference point to anchor their thinking and revisit as needed throughout the session.
  4. Guided ideation & dot voting (30 minutes)
  5. Next steps & closing remarks (5 minutes)

    • Assign owners or champions for the selected ideas who will be responsible for driving their implementation (if not already known).
    • Summarize the key decisions made and actions to be taken.
    • Clarify any follow-up tasks or assignments.
    • Express gratitude for participants’ contributions and conclude the meeting on a positive note.
28
1
Memory Allocation (samwho.dev)
submitted 1 year ago* (last edited 1 year ago) by [email protected] to c/[email protected]
 
 

After reading this: I know Kung-Fu

29
1
Waterfall (beza1e1.tuxen.de)
submitted 1 year ago by [email protected] to c/[email protected]
30
 
 

Reminded me of Kevin Kelly's book Out of Control

31
 
 

As the flagship track at QCon, Architectures You've Always Wondered About showcases real-world examples of innovator companies pushing the limits with modern software systems.

32
 
 

Always interesting to see real life design choices.

33
 
 

cross-posted from: https://group.lt/post/65921

Saving for the comparison with the next year

34
 
 

Seems like an AI recruitment post

35
 
 

Kinda cool ;)

36
 
 

Leslie shares his journey into computing, which started out as something he only did in his spare time as a mathematician. Scott and Leslie discuss the differences and similarities between computer science and software engineering, the math involved in Leslie’s high-level temporal logic of actions (TLA), which can help solve the famous Byzantine Generals Problem, and the algorithms Leslie himself has created. He also reflects on how the building of distributed systems has changed since the ’60s and ’70s.

37
 
 

Specifically, we’re diving into a massive migration project by Khan Academy, involving moving one million lines of Python code and splitting them across more than 40 services, mostly in Go, as part of a migration that took 3.5 years and involved around 100 software engineers.

38
 
 

Always interesting to read real world applications of the concepts. Nubank's framework is a mix of storytelling, design thinking, empathy mapping, ...

storytelling can be used to develop better products around the idea of understanding and executing the “why’s” and “how’s” of the products. Using the techniques related to it, such as research, we can simplify the way we pass messages to the user.

Nubank's framework has three phases, plus a call to action:

  1. Understanding: properly understand the customer problem. After that, we can create our first storyboard. When testing with users, a framework helps guarantee that we’re considering all of our ideas.
  2. Defining: how we’re going to communicate the narrative. The storyboard is very strategic when it comes to helping influence the sequence of events and craft the narrative. Here the "movie script" is done; now make the "movie's scene".
  3. Designing: translate the story you wrote, because, before you started doing anything, you already knew what you were going to do. Just follow what you have planned... By understanding the pain points correctly, we also start to understand our users' actions and how they think. When we master this, we can help customers take actions the way we want them to, to help them achieve their goals.
  4. Call to action: by knowing people’s goals and pain points, whether emotional or logistical, we can anticipate their needs... guarantee that it is aligned with the promises we made to the customer, especially when it comes to marketing. Ask yourself whether what you’re saying in the marketing campaigns is really what will be shown in the product.
39
 
 

Adopting DevOps practices is nowadays a recurring task in the industry. DevOps is a set of practices intended to reduce the friction between software development (Dev) and IT operations (Ops), resulting in higher-quality software and a shorter development lifecycle. Although many resources discuss DevOps practices, they are often inconsistent with each other about which practices are best. Furthermore, they lack the detail and structure needed for newcomers to the DevOps field to understand them quickly.

To tackle this issue, this paper proposes four foundational DevOps patterns: Version Control Everything, Continuous Integration, Deployment Automation, and Monitoring. The patterns are detailed and structured enough to be easily reused by practitioners, yet flexible enough to accommodate the different needs and quirks that might arise in their actual usage context. Furthermore, the patterns are tuned to the DevOps principle of Continuous Improvement: each contains metrics so that practitioners can measure and improve their pattern implementations.
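To make the metrics idea concrete, here is a small sketch of how a team might compute two common delivery metrics. The specific metrics (deployment frequency, commit-to-deploy lead time) and the sample records are illustrative assumptions on my part, not taken from the paper:

```python
from datetime import datetime, timedelta
from statistics import median

# Hypothetical deploy log: (commit_time, deploy_time) pairs.
deploys = [
    (datetime(2024, 5, 1, 9, 0), datetime(2024, 5, 1, 15, 0)),
    (datetime(2024, 5, 2, 10, 0), datetime(2024, 5, 3, 11, 0)),
    (datetime(2024, 5, 6, 8, 0), datetime(2024, 5, 6, 9, 30)),
]

def deployment_frequency(deploys, window_days):
    """Deployments per day over an observation window."""
    return len(deploys) / window_days

def median_lead_time(deploys):
    """Median time from commit to running in production."""
    return median(d - c for c, d in deploys)

print(deployment_frequency(deploys, 7))   # deploys per day over one week
print(median_lead_time(deploys))          # median commit-to-deploy delay
```

Tracking such numbers over time is one way to close the Continuous Improvement feedback loop the paper describes.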


Besides the four patterns above, the paper identifies (but does not describe in detail) two additional pattern candidates, for six in total:

  • Cloud Infrastructure, which includes cloud computing, scaling, infrastructure as code, ...
  • Pipeline, "important for implementing Deployment Automation and Continuous Integration, and segregating it from the others allows us to make the solutions of these patterns easier to use, namely in contexts where a pipeline does not need to be present."

Overview of the pattern candidates and their relation

The paper is also interesting for the structure it uses to describe each pattern:

  • Name: An evocative name for the pattern.
  • Context: Contains the context for the pattern providing a background for the problem.
  • Problem: A question representing the problem that the pattern intends to solve.
  • Forces: A list of forces that the solution must balance out.
  • Solution: A detailed description of the solution for our pattern’s problem.
  • Consequences: The implications, advantages and trade-offs caused by using the pattern.
  • Related Patterns: Patterns which are connected somehow to the one being described.
  • Metrics: A set of metrics to measure the effectiveness of the pattern’s solution implementation.
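For readers who think in code, the template above maps naturally onto a small data structure. A minimal sketch, where the field names mirror the paper's template but the example instance is my own paraphrase rather than text quoted from the paper:

```python
from dataclasses import dataclass

@dataclass
class DevOpsPattern:
    name: str                    # evocative name for the pattern
    context: str                 # background for the problem
    problem: str                 # question the pattern intends to solve
    forces: list[str]            # forces the solution must balance out
    solution: str                # description of the solution
    consequences: list[str]      # implications, advantages, trade-offs
    related_patterns: list[str]  # patterns connected to this one
    metrics: list[str]           # ways to measure the solution's effectiveness

# Illustrative instance (paraphrased, not quoted from the paper).
ci = DevOpsPattern(
    name="Continuous Integration",
    context="Multiple developers merge work into a shared mainline.",
    problem="How can integration problems be detected early?",
    forces=["merge conflicts grow with branch lifetime",
            "feedback must be fast enough to act on"],
    solution="Integrate to mainline frequently; build and test on every merge.",
    consequences=["smaller, safer changes", "requires test automation investment"],
    related_patterns=["Version Control Everything", "Pipeline"],
    metrics=["build duration", "integration frequency"],
)
```

Structuring patterns this uniformly is what makes them comparable and reusable across contexts, which seems to be the paper's main point.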
40
 
 

The attention economy is a pretty important concept in today's socioeconomic systems. Here is an article by the Nielsen Norman Group explaining it a bit in the context of digital products.

Digital products are competing for users’ limited attention. The modern economy increasingly revolves around the human attention span and how products capture that attention.

Attention is one of the most valuable resources of the digital age. For most of human history, access to information was limited. Centuries ago many people could not read and education was a luxury. Today we have access to information on a massive scale. Facts, literature, and art are available (often for free) to anyone with an internet connection.

We are presented with a wealth of information, but we have the same amount of mental processing power as we have always had. The number of minutes in a day has also stayed exactly the same. Today attention, not information, is the limiting factor.

There are many scientific works on the topic; here are some queries for computer science / software engineering databases:

Another related article by NN/g: The Vortex: Why Users Feel Trapped in Their Devices

41
 
 
42
 
 

Highlights

  • Software development research is divided into two incommensurable paradigms.
  • The Rational Paradigm emphasizes problem solving, planning and methods.
  • The Empirical Paradigm emphasizes problem framing, improvisation and practices.
  • The Empirical Paradigm is based on data and science; the Rational Paradigm is based on assumptions and opinions.
  • The Rational Paradigm undermines the credibility of the software engineering research community.

Very good paper by @[email protected] discussing the Rational Paradigm (non-empirical) and the Empirical Paradigm (evidence-based, scientific) in software engineering. Historically, the Rational Paradigm has dominated both software engineering research and industry, which is also evident in international software engineering standards, bodies of knowledge (e.g. IEEE CS SWEBOK), curriculum guidelines, ... Basically, much of the "standard" knowledge and mainstream literature is grounded not in science but in "guru" knowledge. Yet empirical evidence suggests people rarely follow rational approaches (detailed up-front plans, ...) successfully or faithfully.

It also argues that software engineering is currently at level 2 on an "informal scale of empirical commitment". In comparison, medicine is at level 4 (the highest level of empirical commitment).

informal scale of empirical commitment

I think SE is at level two. Most top venues expect empirical data; however, that data often does not directly address effectiveness. Empirical findings and rigorous studies compete with non-empirical concepts and anecdotal evidence. For example, some reviews of a recent paper on software development waste [168] criticized it for its limited contribution over previous work [169], even though the previous work was based entirely on anecdotal evidence and the new paper was based on a rigorous empirical study. Meanwhile, many specialist and second-tier venues do not require empirical data at all.

And concludes with some implications

  1. Much research involves developing new and improved development methods, tools, models, standards and techniques. Researchers who are unwittingly immersed in the Rational Paradigm may create artifacts based on unstated Rational-Paradigm assumptions, limiting their applicability and usefulness. For instance, the project management framework PRINCE2 prescribes that the project board (who set project goals) should not be the same people as project team (who design the system [108]). This is based on the Rationalist assumption that problems are given, and inhibits design coevolution.

  2. Having two paradigms in the same academic community causes miscommunication [4], which undermines consensus and hinders scientific progress [171]. The fundamental rationalist critique of the Empirical Paradigm is that it is patently obvious that employing a more systematic, methodical, logical process should improve outcomes [7], [23], [119], [172], [173]. The fundamental empiricist critique of the Rational Paradigm is that there is no convincing evidence that following more systematic, methodical, logical processes is helpful or even possible [3], [5], [9], [12]. As the Rational Paradigm is grounded in Rationalist epistemology, its adherents are skeptical of empirical evidence [23]; similarly, as the Empirical Paradigm is grounded in empiricist epistemology, its adherents are skeptical of appeals to intuition and common sense [5]. In other words, scholars in different paradigms talk past each other and struggle to communicate or find common ground.

  3. Many reasonable professionals, who would never buy a homeopathic remedy (because a few testimonials obviously do not constitute sound evidence of effectiveness) will adopt a software method or practice based on nothing other than a few testimonials [174], [175]. Both practitioners and researchers should demand direct empirical evaluation of the effectiveness of all proposed methods, tools, models, standards and techniques (cf. [111], [176]). When someone argues that basic standards of evidence should not apply to their research, call this what it is: the special pleading fallacy [177]. Meanwhile, peer reviewers should avoid criticizing or rejecting empirical work for contradicting non-empirical legacy concepts.

  4. The Rational Paradigm leads professionals “to demand up-front statements of design requirements” and “to make contracts with one another on [this] basis”, increasing risk [5]. The Empirical Paradigm reveals why: as the goals and desiderata coevolve with the emerging software product, many projects drift away from their contracts. This drift creates a paradox for the developers: deliver exactly what the contract says for limited stakeholder benefits (and possible harms), or maximize stakeholder benefits and risk breach-of-contract litigation. Firms should therefore consider alternative arrangements including in-house development or ongoing contracts.

  5. The Rational Paradigm contributes to the well-known tension between managers attempting to drive projects through cost estimates and software professionals who cannot accurately estimate costs [88]. Developers underestimate effort by 30–40% on average [178] as they rarely have sufficient information to gauge project difficulty [18]. The Empirical Paradigm reveals that design is an unpredictable, creative process, for which accounting-based control is ineffective.

  6. Rational Paradigm assumptions permeate IS2010 [70] and SE2014 [179], the undergraduate model curricula for information systems and software engineering, respectively. Both curricula discuss requirements and lifecycles in depth; neither mention Reflection-in-Action, coevolution, amethodical development or any theories of SE or design (cf. [180]). Nonempirical legacy concepts including the Waterfall Model and Project Triangle should be dropped from curricula to make room for evidenced-based concepts, models and theories, just like in all of the other social and applied sciences.


Abstract

The most profound conflict in software engineering is not between positivist and interpretivist research approaches or Agile and Heavyweight software development methods, but between the Rational and Empirical Design Paradigms. The Rational and Empirical Paradigms are disparate constellations of beliefs about how software is and should be created. The Rational Paradigm remains dominant in software engineering research, standards and curricula despite being contradicted by decades of empirical research. The Rational Paradigm views analysis, design and programming as separate activities despite empirical research showing that they are simultaneous and inextricably interconnected. The Rational Paradigm views developers as executing plans despite empirical research showing that plans are a weak resource for informing situated action. The Rational Paradigm views success in terms of the Project Triangle (scope, time, cost and quality) despite empirical research showing that the Project Triangle omits critical dimensions of success. The Rational Paradigm assumes that analysts elicit requirements despite empirical research showing that analysts and stakeholders co-construct preferences. The Rational Paradigm views professionals as using software development methods despite empirical research showing that methods are rarely used, very rarely used as intended, and typically weak resources for informing situated action. This article therefore elucidates the Empirical Design Paradigm, an alternative view of software development more consistent with empirical evidence. Embracing the Empirical Paradigm is crucial for retaining scientific legitimacy, solving numerous practical problems and improving software engineering education.

43
 
 

There are people/researchers from ACM and so on sharing pretty interesting, useful content about software engineering.

44
 
 

Developers across government and industry should commit to using memory safe languages for new products and tools, and identify the most critical libraries and packages to shift to memory safe languages, according to a study from Consumer Reports.

The US nonprofit, which is known for testing consumer products, asked what steps can be taken to help usher in "memory safe" languages, like Rust, over options such as C and C++. Consumer Reports said it wanted to address "industry-wide threats that cannot be solved through user behavior or even consumer choice" and it identified "memory unsafety" as one such issue. 

The report, Future of Memory Safety, looks at a range of issues, including challenges in building memory-safe language adoption within universities, levels of distrust of memory-safe languages, introducing memory-safe languages to code bases written in other languages, and incentives and public accountability.

More information:

45
 
 

This kind of scaling issue is new to Codeberg (a nonprofit free software project), but not to the world. All projects on earth likely went through this at a certain point or will experience it in the future.

When people like me talk about scaling... It's about increasing computing power, distributed storage, replicated databases and so on. There are all kinds of technology available to solve scaling issues. So why, damn, is Codeberg still having performance issues from time to time?

...we face the "worst" kind of scaling issue in my perception. That is, one you don't see coming (e.g. because the software gets slower day by day, or because you watch the storage pool fill up). Instead, it appears out of the blue.

The hardest scaling issue is: scaling human power.

Configuration, Investigation, Maintenance, User Support, Communication – all require some effort, and it's not easy to automate. In many cases, automation would consume even more human resources to set up than we have.

There are no paid night shifts, not even payment at all. Still, people have become used to the always-available guarantees, and demand the same from us: Occasional slowness in the evening of the CET timezone? Unbearable!

I do understand the demand. We definitely aim for a better service than we sometimes provide. However, sometimes, the frustration of angry social-media-guys carries me away...

two primary blockers that prevent scaling human resources. The first one is trust: because we can't yet afford hiring employees who work on tasks for a defined amount of time, work naturally has to be distributed over many volunteers with limited time commitment... The second problem is in part technical. Unlike major players, which have nearly unlimited resources available to meet high demand, scaling Codeberg's systems...

TL;DR: sustainability issues with scaling, because Codeberg is a nonprofit with very limited resources, mainly human resources, in the face of high demand. Unpaid volunteers do all the work, so it needs more people volunteering, and more money.

46
47
 
 

How could you use Android, Firebase, TensorFlow, Google Cloud, Flutter, or any of your favorite Google technologies to promote employment for all, economic growth, and climate action?

Join us to build solutions for one or more of the United Nations 17 Sustainable Development Goals. These goals were agreed upon in 2015 by all 193 United Nations Member States and aim to end poverty, ensure prosperity, and protect the planet by 2030.

For students. Mostly interesting for promoting the Sustainable Development Goals.

48
1
submitted 2 years ago* (last edited 2 years ago) by [email protected] to c/[email protected]
49
 
 

Nice notes and links.

50
1
submitted 2 years ago* (last edited 2 years ago) by [email protected] to c/[email protected]
 
 

Cristian Velazquez, a staff site reliability engineer at Uber, helped fix an important issue for the company's software in 2021. Then Uber asked him to write about it on the company's engineering blog. His post has generated over 84,000 page views since it was published.

Uber is one of several large companies hoping to reach engineers this way. Organizations like Google, Apple, and Meta are also in the blogging game.

The sites combine glimpses into what life is like at a company with case studies about complex programming tasks. The posts tend to have the titles of grad school papers and the editorial flair of instruction manuals. They're often created to increase transparency, provide resources to the engineering community — and entice people to go work at these companies.

Some companies' engineering feeds which I follow
