>TruthFinder and Instant Checkmate are subscription-based services allowing customers to perform background checks on other people. When conducting background checks, the sites will use publicly scraped data, federal, state, and court records, criminal records, social media, and other sources.

AIs as Computer Hackers - Schneier on Security
>In 2016, DARPA ran a similarly styled event for artificial intelligence (AI). One hundred teams entered their systems into the Cyber Grand Challenge. After completing qualifying rounds, seven finalists competed at the DEFCON hacker convention in Las Vegas. The competition occurred in a specially designed test environment filled with custom software that had never been analyzed or tested. The AIs were given 10 hours to find vulnerabilities to exploit against the other AIs in the competition and to patch themselves against exploitation. A system called Mayhem, created by a team of Carnegie Mellon computer security researchers, won. The researchers have since commercialized the technology, which is now busily defending networks for customers like the U.S. Department of Defense.
>
>There was a traditional human-team capture-the-flag event at DEFCON that same year. Mayhem was invited to participate. It came in last overall, but it didn’t come in last in every category all of the time.

Foundational DevOps Patterns
> Adopting DevOps practices is nowadays a recurring task in the industry. DevOps is a set of practices intended to reduce the friction between software development (Dev) and IT operations (Ops), resulting in higher-quality software and a shorter development lifecycle. Even though many resources talk about DevOps practices, they are often inconsistent with each other about the best DevOps practices. Furthermore, they lack the detail and structure that beginners to the DevOps field need to quickly understand them.
>
> In order to tackle this issue, this paper proposes **four foundational DevOps patterns: Version Control Everything, Continuous Integration, Deployment Automation, and Monitoring**. The patterns are both detailed enough and structured to be easily reused by practitioners and flexible enough to accommodate different needs and quirks that might arise from their actual usage context. Furthermore, the **patterns are tuned to the DevOps principle of Continuous Improvement by containing metrics so that practitioners can improve their pattern implementations**.

---

Beyond the four above, the article actually identified and included 2 other patterns (so 6 in total):

- **Cloud Infrastructure**, which includes cloud computing, scaling, infrastructure as code, ...
- **Pipeline**, "important for implementing Deployment Automation and Continuous Integration, and segregating it from the others allows us to make the solutions of these patterns easier to use, namely in contexts where a pipeline does not need to be present."

![Overview of the pattern candidates and their relation](https://group.lt/pictrs/image/0d291dda-7c3d-44b5-84f2-1b2630ebf49d.png)

The paper is interesting for the following structure used in describing the patterns:

> - Name: An evocative name for the pattern.
> - Context: Contains the context for the pattern providing a background for the problem.
> - Problem: A question representing the problem that the pattern intends to solve.
> - Forces: A list of forces that the solution must balance out.
> - Solution: A detailed description of the solution for our pattern’s problem.
> - Consequences: The implications, advantages and trade-offs caused by using the pattern.
> - Related Patterns: Patterns which are connected somehow to the one being described.
> - Metrics: A set of metrics to measure the effectiveness of the pattern’s solution implementation.
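The Metrics element is the part practitioners can act on most directly. As a minimal sketch (mine, not from the paper), here is how two common delivery metrics, deployment frequency and lead time, could be computed from a hypothetical deployment log; the data and variable names are invented for illustration:

```python
from datetime import date

# Hypothetical deployment log: (commit_date, deploy_date) pairs.
deployments = [
    (date(2023, 1, 2), date(2023, 1, 3)),
    (date(2023, 1, 5), date(2023, 1, 5)),
    (date(2023, 1, 9), date(2023, 1, 12)),
]

window_days = 14  # observation window for the frequency metric

# Deployment frequency: deploys per day over the window.
deployment_frequency = len(deployments) / window_days

# Lead time: days from commit to running in production.
lead_times = [(deployed - committed).days for committed, deployed in deployments]
mean_lead_time = sum(lead_times) / len(lead_times)

print(f"deployment frequency: {deployment_frequency:.2f}/day")  # 0.21/day
print(f"mean lead time: {mean_lead_time:.1f} days")             # 1.3 days
```

Tracking a metric like this over time, rather than as a one-off number, is what ties it to the Continuous Improvement principle the paper mentions.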


matrix.group.lt Homeserver version: Synapse 1.76.0
Updated - please report issues.

>Tracked as CVE-2023-22501, the vulnerability has a critical severity score of 9.4, as calculated by Atlassian. It could be used to target bot accounts in particular, due to their frequent interactions with other users and their increased likelihood to be included in Jira issues or requests or receiving emails with a "View Request" link - either condition being necessary for acquiring signup tokens.

Always interesting to read real-world applications of the concepts. Nubank's framework is a mix of storytelling, design thinking, empathy mapping, ...

> storytelling can be used to develop better products around the idea of understanding and executing the “why’s” and “how’s” of the products. Using the techniques related to it, such as research, we can simplify the way we pass messages to the user.

Nubank's framework has these phases:

> 1. Understanding: properly understand the customer problem. After that, we can create our first storyboard. When working on testing with users, a framework is good to guarantee that we’re considering all of our ideas.
> 2. Defining: how we’re going to communicate the narrative. As you can see, the storyboard is very strategic when it comes to helping influence the sequence of events and craft the narrative. Here the "movie script" is done. Now make the "movie's scenes".
> 3. Designing: translate the story you wrote, because, before you started doing anything, you already knew what you were going to do. Just follow what you have planned... Understanding the pain points correctly, we also start to understand our users' actions and how they think. When we master this, we can help the customer take the actions in the way that we want them to, to help them achieve their goals.
> 4. Call to action: by knowing people’s goals and pain points, whether emotional or logistical, we can anticipate their needs... guarantee that it is aligned with the promises we made to the customer, especially when it comes to marketing. Ask yourself if what you’re saying in the marketing campaigns is really what will be shown in the product.

Attention economy is a pretty important concept in today's socioeconomic systems. Here is an article by Nielsen Norman Group explaining it a bit in the context of digital products.

> Digital products are competing for users’ limited attention. The modern economy increasingly revolves around the human attention span and how products capture that attention.
>
> Attention is one of the most valuable resources of the digital age. For most of human history, access to information was limited. Centuries ago many people could not read and education was a luxury. Today we have access to information on a massive scale. Facts, literature, and art are available (often for free) to anyone with an internet connection.
>
> We are presented with a wealth of information, but we have the same amount of mental processing power as we have always had. The number of minutes in every day has also stayed exactly the same. Today attention, not information, is the limiting factor.

There are many scientific works on the topic; here are some queries in computer science / software engineering databases:

- [IEEE Xplore](https://ieeexplore.ieee.org/search/searchresult.jsp?newsearch=true&queryText=%22attention%20economy%22)
- [ACM DL](https://dl.acm.org/action/doSearch?fillQuickSearch=false&target=advanced&expand=dl&AllField=AllField%3A%28%22attention+economy%22%29)
- [arXiv](https://arxiv.org/search/?query=%22attention+economy%22&searchtype=all&source=header)

Another related article by NN/g: [The Vortex: Why Users Feel Trapped in Their Devices](https://www.nngroup.com/articles/device-vortex/)

>Most people do not understand how greatly thought influences mankind. Here I want to make a distinction between thinking and thought. The resultant of many acts of thinking becomes the general program called thought, which can be stored in a library or computer or in one’s own brain and then consulted later. Once a thought has been made into a program in that way, you do not have to think it again: you have it in storage. Now, this is a great labor-saving device—all that we see around us as civilization and technology is the result of the thought of past centuries, which we do not have to repeat—but it is also dangerous. If we do not have to think again, if we can just consult old thoughts, then we may become mechanical.

The two paradigms of software development research
> ## Highlights
>
> - Software development research is divided into two incommensurable paradigms.
> - The **Rational Paradigm** emphasizes problem solving, planning and methods.
> - The **Empirical Paradigm** emphasizes problem framing, improvisation and practices.
> - The Empirical Paradigm is based on data and science; the Rational Paradigm is based on assumptions and opinions.
> - The Rational Paradigm undermines the credibility of the software engineering research community.

---

Very good paper by @paulralph@mastodon.acm.org discussing the Rational Paradigm (non-empirical) and the Empirical Paradigm (evidence-based, scientific) in software engineering.

Historically the Rational Paradigm has dominated both software engineering research and industry, which is also evident in software engineering international standards, bodies of knowledge (e.g. IEEE CS SWEBOK), curriculum guidelines, ... Basically, much of the "standard" knowledge and mainstream literature has no basis in science, only in "guru" knowledge. Yet people rarely follow rational approaches (detailed up-front plans, ...) successfully or faithfully.

The paper also argues that software engineering is currently at level 2 on an "informal scale of empirical commitment". In comparison, medicine is at level 4 (the greatest level of empirical commitment).

![informal scale of empirical commitment](https://group.lt/pictrs/image/54e3e1d0-c8c4-4dbc-be02-a137ff472026.png)

> I think SE is at level two. Most top venues expect empirical data; however, that data often does not directly address effectiveness. Empirical findings and rigorous studies compete with non-empirical concepts and anecdotal evidence. For example, some reviews of a recent paper on software development waste [168] criticized it for its limited contribution over previous work [169], even though the previous work was based entirely on anecdotal evidence and the new paper was based on a rigorous empirical study. Meanwhile, many specialist and second-tier venues do not require empirical data at all.

And it concludes with some implications:

> 1. Much research involves developing new and improved development methods, tools, models, standards and techniques. Researchers who are unwittingly immersed in the Rational Paradigm may create artifacts based on unstated Rational-Paradigm assumptions, limiting their applicability and usefulness. For instance, the project management framework PRINCE2 prescribes that the project board (who set project goals) should not be the same people as the project team (who design the system) [108]. This is based on the Rationalist assumption that problems are given, and inhibits design coevolution.
>
> 2. Having two paradigms in the same academic community causes miscommunication [4], which undermines consensus and hinders scientific progress [171]. The fundamental rationalist critique of the Empirical Paradigm is that it is patently obvious that employing a more systematic, methodical, logical process should improve outcomes [7], [23], [119], [172], [173]. The fundamental empiricist critique of the Rational Paradigm is that there is no convincing evidence that following more systematic, methodical, logical processes is helpful or even possible [3], [5], [9], [12]. As the Rational Paradigm is grounded in Rationalist epistemology, its adherents are skeptical of empirical evidence [23]; similarly, as the Empirical Paradigm is grounded in empiricist epistemology, its adherents are skeptical of appeals to intuition and common sense [5]. In other words, scholars in different paradigms talk past each other and struggle to communicate or find common ground.
>
> 3. Many reasonable professionals, who would never buy a homeopathic remedy (because a few testimonials obviously do not constitute sound evidence of effectiveness) will adopt a software method or practice based on nothing other than a few testimonials [174], [175]. Both practitioners and researchers should demand direct empirical evaluation of the effectiveness of all proposed methods, tools, models, standards and techniques (cf. [111], [176]). When someone argues that basic standards of evidence should not apply to their research, call this what it is: the special pleading fallacy [177]. Meanwhile, peer reviewers should avoid criticizing or rejecting empirical work for contradicting non-empirical legacy concepts.
>
> 4. The Rational Paradigm leads professionals “to demand up-front statements of design requirements” and “to make contracts with one another on [this] basis”, increasing risk [5]. The Empirical Paradigm reveals why: as the goals and desiderata coevolve with the emerging software product, many projects drift away from their contracts. This drift creates a paradox for the developers: deliver exactly what the contract says for limited stakeholder benefits (and possible harms), or maximize stakeholder benefits and risk breach-of-contract litigation. Firms should therefore consider alternative arrangements including in-house development or ongoing contracts.
>
> 5. The Rational Paradigm contributes to the well-known tension between managers attempting to drive projects through cost estimates and software professionals who cannot accurately estimate costs [88]. Developers underestimate effort by 30–40% on average [178] as they rarely have sufficient information to gauge project difficulty [18]. The Empirical Paradigm reveals that design is an unpredictable, creative process, for which accounting-based control is ineffective.
>
> 6. Rational Paradigm assumptions permeate IS2010 [70] and SE2014 [179], the undergraduate model curricula for information systems and software engineering, respectively. Both curricula discuss requirements and lifecycles in depth; neither mentions Reflection-in-Action, coevolution, amethodical development or any theories of SE or design (cf. [180]). Nonempirical legacy concepts including the Waterfall Model and Project Triangle should be dropped from curricula to make room for evidence-based concepts, models and theories, just like in all of the other social and applied sciences.

---

> ## Abstract
>
> The most profound conflict in software engineering is not between positivist and interpretivist research approaches or Agile and Heavyweight software development methods, but between the Rational and Empirical Design Paradigms. The Rational and Empirical Paradigms are disparate constellations of beliefs about how software is and should be created. The Rational Paradigm remains dominant in software engineering research, standards and curricula despite being contradicted by decades of empirical research. The Rational Paradigm views analysis, design and programming as separate activities despite empirical research showing that they are simultaneous and inextricably interconnected. The Rational Paradigm views developers as executing plans despite empirical research showing that plans are a weak resource for informing situated action. The Rational Paradigm views success in terms of the Project Triangle (scope, time, cost and quality) despite empirical research showing that the Project Triangle omits critical dimensions of success. The Rational Paradigm assumes that analysts elicit requirements despite empirical research showing that analysts and stakeholders co-construct preferences. The Rational Paradigm views professionals as using software development methods despite empirical research showing that methods are rarely used, very rarely used as intended, and typically weak resources for informing situated action. This article therefore elucidates the Empirical Design Paradigm, an alternative view of software development more consistent with empirical evidence. Embracing the Empirical Paradigm is crucial for retaining scientific legitimacy, solving numerous practical problems and improving software engineering education.

- Google: [AppSheet](https://appsheet.com/)
- Apple: [SwiftUI](https://developer.apple.com/xcode/swiftui/)
- Microsoft: [PowerApps](https://powerapps.microsoft.com)
- Amazon: [HoneyCode](https://www.honeycode.aws/), [Amplify Studio](https://aws.amazon.com/amplify/studio/)


I remember when reverse tunnels were a security issue - now they are widely used!


All this data "lost" somewhere into the wrong hands (I mean ODIN Intelligence).

Not sure how this got into my reading list, nevertheless it was interesting to read about Borges' thoughts

@seresearchers@a.gup.pe a software engineering researchers group on Mastodon (Guppe Groups)
There are people/researchers from ACM and so on sharing pretty interesting, useful content about software engineering.


eBPF is a powerhouse you don't really need to care about (unless you want to)

> "Ong's Hat is one of the earliest Internet-based secret history conspiracy theories. It was created as a piece of collaborative fiction by four core individuals, dating back to the 1980s, although the membership propagating the tale changed over time. Ong's Hat is often cited as the first ARG on many lists of alternate reality games.
>
> The characters were largely based in the ghost town of Ong's Hat, New Jersey, hence the name of the project."

from [Wikipedia](https://en.wikipedia.org/wiki/Ong%27s_Hat)

> Developers across government and industry should commit to using memory safe languages for new products and tools, and identify the most critical libraries and packages to shift to memory safe languages, according to a study from Consumer Reports.
>
> The US nonprofit, which is known for testing consumer products, **asked what steps can be taken to help usher in "memory safe" languages, like Rust**, over options such as C and C++. Consumer Reports said it wanted to address "industry-wide threats that cannot be solved through user behavior or even consumer choice" and it identified "memory unsafety" as one such issue.
>
> The [report](https://advocacy.consumerreports.org/research/report-future-of-memory-safety/), Future of Memory Safety, looks at a range of issues, including challenges in building memory safe language adoption within universities, levels of distrust for memory safe languages, introducing memory safe languages to code bases written in other languages, and also incentives and public accountability.

More information:

- https://advocacy.consumerreports.org/research/report-future-of-memory-safety/
- https://advocacy.consumerreports.org/wp-content/uploads/2023/01/Memory-Safety-Convening-Report.pdf
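A tiny illustration of the guarantee the report is advocating (my example, not from the report): in a memory safe language, an out-of-bounds write is detected at runtime and becomes a recoverable error, whereas the equivalent write in C could silently corrupt adjacent memory and become an exploitable bug. Python, itself memory safe, shows the behavior:

```python
# A fixed-size "buffer" of 8 slots.
buf = [0] * 8

try:
    buf[8] = 42  # write one element past the end
except IndexError as exc:
    # The memory-safe runtime rejects the bad access with an error;
    # a C program doing the same could overwrite unrelated memory.
    print("caught:", exc)
```

Rust provides the same bounds-checking guarantee while compiling to native code, which is why the report singles it out as an alternative to C and C++.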


Built with eBPF & OpenTelemetry - Applications are instrumented using well-known, battle-tested open source observability technologies

The hardest scaling issue by Codeberg (a nonprofit, free software platform/service for code hosting)
> This kind of scaling issue is new to Codeberg (a nonprofit free software project), but not to the world. All projects on earth likely went through this at a certain point or will experience it in the future.
>
> When people like me talk about scaling... It's about increasing computing power, distributed storage, replicated databases and so on. There are all kinds of technology available to solve scaling issues. So why, damn, is Codeberg still having performance issues from time to time?
>
> ...we face the "worst" kind of scaling issue in my perception. That is, if you don't see it coming (e.g. because the software gets slower day by day, or because you see how the storage pool fills up). Instead, it appears out of the blue.
>
> **The hardest scaling issue is: scaling human power.**
>
> Configuration, Investigation, Maintenance, User Support, Communication – all require some effort, and it's not easy to automate. In many cases, automation would consume even more human resources to set up than we have.
>
> There are no paid night shifts, not even payment at all. Still, people have become used to the always-available guarantees, and demand the same from us: Occasional slowness in the evening of the CET timezone? Unbearable!
>
> I do understand the demand. We definitely aim for a better service than we sometimes provide. However, sometimes, the frustration of angry social-media-guys carries me away...
>
> There are two primary blockers that prevent scaling human resources. The first one is: trust. Because we can't yet afford hiring employees that work on tasks for a defined amount of time, work naturally has to be distributed over many volunteers with limited time commitment... The second problem is in part technical. Unlike major players, which have nearly unlimited resources available to meet high demand, scaling Codeberg's systems...
TLDR: sustainability issues for scaling, because Codeberg is a nonprofit with very limited resources, mainly human resources, in the face of high demand. Unpaid volunteers do all the work. So it needs more people working as volunteers, and more money.



Book list that has shaped the podcast.




> How could you use Android, Firebase, TensorFlow, Google Cloud, Flutter, or any of your favorite Google technologies to promote employment for all, economic growth, and climate action?
>
> Join us to build solutions for one or more of the United Nations 17 Sustainable Development Goals. These goals were agreed upon in 2015 by all 193 United Nations Member States and aim to end poverty, ensure prosperity, and protect the planet by 2030.

For students. Mostly interesting for promoting the Sustainable Development Goals.

> We present the 10 most-visited posts of the previous year. This year’s list of top 10 posts highlights our work in **deepfakes, artificial intelligence, machine learning, DevSecOps, and zero trust**.

1. https://insights.sei.cmu.edu/blog/how-easy-is-it-to-make-and-detect-a-deepfake/
2. https://insights.sei.cmu.edu/blog/what-is-explainable-ai/
3. https://insights.sei.cmu.edu/blog/a-technical-devsecops-adoption-framework/
4. https://insights.sei.cmu.edu/blog/a-hitchhikers-guide-to-ml-training-infrastructure/
5. https://insights.sei.cmu.edu/blog/a-case-study-in-applying-digital-engineering/
6. https://insights.sei.cmu.edu/blog/two-categories-of-architecture-patterns-for-deployability/
7. https://insights.sei.cmu.edu/blog/the-zero-trust-journey-4-phases-of-implementation/
8. https://insights.sei.cmu.edu/blog/tactics-and-patterns-for-software-robustness/
9. https://insights.sei.cmu.edu/blog/containerization-at-the-edge/
10. https://insights.sei.cmu.edu/blog/probably-dont-rely-on-epss-yet/



The Engineers Are Bloggers Now
> Cristian Velazquez, a staff site reliability engineer at Uber, helped fix an important issue for the company's software in 2021. Then Uber asked him to write about it on the company's engineering blog. His post has generated over 84,000 page views since it was published.
>
> Uber is one of several large companies hoping to reach engineers this way. Organizations like Google, Apple, and Meta are also in the blogging game.
>
> The sites combine glimpses into what life is like at a company with case studies about complex programming tasks. The posts tend to have the titles of grad school papers and the editorial flair of instruction manuals. They're often created to increase transparency, provide resources to the engineering community — and entice people to go work at these companies.

Some companies' engineering feeds which I follow:

- https://developers.googleblog.com/feeds/posts/default
- https://medium.com/feed/google-developer-experts
- https://aws.amazon.com/blogs/aws/feed
- https://developer.apple.com/news/rss/news.rss
- https://engineering.fb.com/feed/
- https://azurecomcdn.azureedge.net/en-us/blog/feed/
- https://building.nubank.com.br/feed/
- https://www.uber.com/blog/engineering/rss
- https://medium.com/feed/airbnb-engineering
- https://medium.com/feed/pinterest-engineering

Combining Design Thinking with Lean Startup and Agile
> **Design Thinking** could have really helped to understand the problem customers were facing (they were looking to study new concepts, but moreover to discuss ideas with their peers in class, i.e. interactive group learning).
>
> **Lean Startup** would have helped to avoid the problem of building something people were not looking for (training without Powerpoint),
>
> and **Agile** could have helped to cut the dev cycle by 50% by just building iteratively.
>
> Gartner introduced a model in 2016 where they connected these three models.
>
> **Gartner: Combine Design Thinking, Lean Startup and Agile to Drive Digital Innovation**

More information:

- https://www.productpizza.com/combining-design-thinking-with-lean-startup-and-agile/
- https://www.gartner.com/en/documents/3941917 (2019)
- https://www.gartner.com/en/documents/3200917 (2016)

>Expand your horizons by trying out 12 different programming languages in 2023.
>
>Go old-school with COBOL, cutting edge with Unison or esoteric with Prolog. Explore low-level code with Assembly, expressions with a Lisp or functional with Haskell!

More information:

- https://exercism.org/challenges/12in23
- https://forum.exercism.org/t/the-12in23-challenge/2213

> Stakeholders’ buy-in and support is an integral component of success for any UX project, as they translate into resources, bandwidth, and approval. However, navigating stakeholder dynamics requires a thoughtful mix of listening, collaboration, communication, influence, and negotiation. This balancing act leads to stakeholder engagement and ultimately creates successful, long-term relationships. > > Continuous communication with stakeholders is important for any UX project — first, because it helps them understand and appreciate what UX does and, second, because it helps UX learn about other essential aspects of the business. Despite this duality, the burden of communication usually falls on UX — because stakeholders are inherently busy and possibly focused on many other things besides UX.

Software Engineering Institute’s DevSecOps Platform-Independent Model (PIM), Version 2.1
> DevSecOps is an engineering practice that promotes collaboration among development, security, and operations. When implemented, it creates a socio-technical system that uses automation for flexible, rapid, frequent delivery of secure infrastructure and software to production. Software development organizations must tailor each DevSecOps pipeline to the people, processes, and technology needed to provide a product or service. Until recently, there was no consistent basis for managing software-intensive development, cybersecurity, and operations in distributed systems.
>
> Then in May, the SEI released version 1.0 of the DevSecOps PIM, a reusable reference architecture for DevSecOps pipelines. Software development organizations can use the online, interactive PIM as a reference architecture or assessment tool for their own DevSecOps pipelines.

More information:

- https://cmu-sei.github.io/DevSecOps-Model/
- https://insights.sei.cmu.edu/news/devsecops-platform-independent-model-receives-major-update/