Yuu Yin

Keyoxide: https://keyoxide.org/9f193ae8aa25647ffc3146b5416f303b43c20ac3

OpenPGP: openpgp4fpr:9f193ae8aa25647ffc3146b5416f303b43c20ac3

  • 48 Posts
  • 43 Comments
Joined 7M ago
Cake day: Nov 08, 2022


Some of them will detect whether you are using virtualization. For example, http://safeexambrowser.org/ by ETH Zurich

Ironically enough, it is free software: https://github.com/SafeExamBrowser


The nature of an ultra-faint galaxy in the cosmic Dark Ages seen with JWST https://arxiv.org/abs/2210.15639

Using it as the backend for a very important Web app (with possible IoT applications in the near future as well) which I already conceptualized and have some prototypes for—this is what motivates me. I feel, for this project specifically, I should first learn the official Book (which I am doing) and have a play with the recommended libraries and the take of Rust on Nails. I also have many other interesting projects in mind, and want to contribute to e.g. Lemmy (I have many Rust projects git cloned, including it).


> their work essentially goes in the trash

They probably learned a lot in the process; that is the most important thing for them, after all. But relying on an API is risky, so always go with HTML scraping. The frontends are super useful for finding information already there without accessing the actual website. Always use Lemmy here for everything else.


It is human nature to forget things; we are not good at retaining long-term information that we do not use (though people do spaced repetition to try). People are good at recognizing information, so do active learning and active recall with methods like “taking smart notes” to build a personal knowledge base that is an authentic part of you, one that reorganizes information from other sources and interconnects it. Then one can reuse that for a later final work (instead of starting from step 0), demonstrating one’s mastery of that information.

But I do really think that traditional books are BAD at this point in the development of the socioeconomic system. Now, information technology (computer applications, etc.) needs to do the most not only for describing knowledge, but for being more engaging (given what we know about the human mind), easier to search, to interconnect, to interact with, etc., for active learning and active recall. Jupyter Notebooks, for example. Really, apps for learning to e.g. play piano adapt to us humans way more, so it is way easier to learn. The problem: books do not adapt to humans, while software engineering in UX/UI takes adapting to humans (instead of the opposite) as a main principle. And we now have artificial intelligence (GPT, etc.) that is way better than most people at supporting active learning and active recall.

What I find most valuable though is your take on fiction books; I do not read them anymore as I think they are a total waste of time for me at this point, just like most other forms of the attention economy. Now I see fiction books (or anything not technical) can be useful to inform one about human nature, consciousness and unconsciousness, including living in this global socioeconomic system. Of course, all with the subjectivity of fiction.


> Alas, for at least 5 years the major labels have block-booked a lot of the LP printing presses for reissues, making it impossible for independent artists to get a pressing without roughly a 9 month lag.

I see. In the case I mentioned, what I see is artists who have been underground with underground labels since the beginning now partnering up with new underground labels who somehow got the rights for reissuing, or for releasing unreleased content.


> I’ve made a good side career selling LP vinyl, given how poor the contemporary state of the music industry is. The worse the state of the economy, the more these old artifacts become assets of economic significance.

Oh I’ve seen this and I’m glad this is a thing; not exactly reselling old vinyls, but the fact that underground artists are able to release new/old stuff in vinyl format with wonderful production made with the heart instead of solely profit in mind.

> I would hate to be a young performer.

I have come to know a very good young performer, but that career path was just impossible. As I see it, doing music or art at this point is only good for individual/collective human expression; it is totally unfeasible to build a career on it, as it is meaningless at this point.


I’ve seen a ton of demand for data science. For anyone starting now I think it would be better to go AI/DS than software engineering; the latter will generally require you to have 10 years of experience in a tech that has existed for 1 year haha


cross-posted from: https://group.lt/post/65921

> Saving for the comparison with the next year

@indieterminacy@lemmy.ml please set the community to also accept comments with undetermined language; otherwise one has to manually specify English every time.


The entertainment industry is nothing but capitalism; there is no emotion left. So that will not make a difference other than the consequences of capitalism. Everything is produced by a few companies and producers, and performers just perform for the profit, fame, and influence of it. AI is just the next step. For this mainstream industry specifically, I do not hold feelings. True artists are long forgotten anyway.


How could we attract the free and open source communities to Lemmy?
I suppose it only makes sense to raise awareness of the benefits of the freely licensed software and services of the fediverse over the dangerous and unethical proprietary services in existence, such as Reddit now going to IPO. That happened with Twitter->Mastodon; it can happen with Reddit->Lemmy as well. I suppose the users most likely to be open to trying it would be free software and free culture users. Besides that, an effort on content creation and attracting content creators would make it an attractive place. What are your thoughts? What were the efforts so far? What are the challenges? Is it so hard to make people migrate?

2 things that help me very much: Perject for GNU Emacs, and Contexts + the usual workspaces for XMonad.


can’t believe it got to this point. the heads of the Rust Foundation seem more self-interested than community-oriented.


I listened to it when you originally posted it and made some annotations; commenting on some now.

Lamport talks about all this “developers shall be ENGINEERS and know their math”, BUT most software engineering positions are not engineering, much less classical engineering. BECAUSE why spend effort learning math WHEN one can use all the constructed abstractions to get a greater return on investment with less effort? I do not think people who do high-level development need to know math they won’t use anyway; but those jobs will likely be automated earlier.

I think, of course, actual engineering comes in when one needs to do lower-level development, depending on the project domain, or things that need to be correct. I mean, systems cannot actually be 100% correct, including because chips are proprietary, so there is no way to fully verify them.

Interesting mention of the clocks paper; its actual implicit insight is in a system’s components using the same commands/inputs/computations so as to have the same state machine, besides the consensus algorithm for fault tolerance, and the mutual exclusion algorithm.

And the ideas coming up when working on problems.


KDE Connect has been very unreliable for me. I’m using Magic Wormhole now.
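For NixOS users, a minimal sketch to make it available declaratively (assuming the nixpkgs attribute magic-wormhole; the snippet is illustrative):

{
  # Provides the `wormhole` CLI for ad-hoc encrypted file transfer.
  environment.systemPackages = with pkgs; [
    magic-wormhole
  ];
}

Then it is wormhole send somefile on one machine, and wormhole receive (plus typing the code phrase) on the other.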


This dormant black hole is about 10 times more massive than the sun and is located about 1,600 light-years away in the constellation Ophiuchus, making it three times closer to Earth than the previous record holder.


Artificial constellations that pollute the night sky… I remember there was a popular paper on the negative effects of this.


> The consequence of Docker Compose is that most people use podman containers the same way as they use Docker containers: you first create the container, and then you figure out a way to restart the container on every reboot. And this approach does not work with podman auto-update, because it requires this process to be upside-down… Wait, upside-down? What do I mean by that?

> The canonical way of starting podman containers at boot time is the creation of custom systemd units for them. This is cool and allows having daemonless, independent containers running. podman itself provides a handy way of creating those systemd units, e.g. here for a new nginx container: […]

interesting… as far as i remember the official podman docs say nothing about that; or at least i do not remember seeing anything. so i ended up using compose with the unofficial podman-compose, which ended up being very frustrating.

so i thought it was primarily meant for OpenShift instead.

maybe i’ll give podman another try now that i’m aware of that systemd integration.
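if i do, it will probably be via the NixOS oci-containers module, which generates exactly those systemd units with podman as the backend. a minimal sketch using the stock virtualisation.oci-containers options (the nginx image and port mapping here are just illustrative):

{
  virtualisation.oci-containers = {
    backend = "podman";
    # Each entry becomes a generated podman-<name>.service systemd unit.
    containers.nginx = {
      image = "docker.io/library/nginx:latest";
      ports = [ "8080:80" ];  # host:container
    };
  };
}

with this, systemctl status podman-nginx.service shows the generated unit, and the container starts at boot without a compose file.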


Nightly builds of all official rust mdbooks in the EPUB format, The Rust eBookshelf
> This project aims at providing nightly builds of all official rust mdbooks in epub format. It is born out of the difficulty I encountered when starting my rust apprenticeship to find recent ebook versions of the official documentation.
>
> If you encounter any issue, have any suggestion or would like to improve this site and/or its content, please go to https://github.com/dieterplex/rust-ebookshelf/ and file an issue or create a pull request.

Always interesting to read real-world applications of the concepts. Nubank's framework is a mix of storytelling, design thinking, empathy mapping, ...

> storytelling can be used to develop better products around the idea of understanding and executing the “why’s” and “how’s” of the products. Using the techniques related to it, such as research, we can simplify the way we pass messages to the user.

Nubank's framework has three phases plus a call to action:

> 1. Understanding: properly understand the customer problem. After that, we can create our first storyboard. When working on testing with users, a framework is good to guarantee that we’re considering all of our ideas.
> 2. Defining: how we’re going to communicate the narrative. As you can see, the storyboard is very strategic when it comes to helping influence the sequence of events and craft the narrative. Here the "movie script" is done. Now make the "movie's scenes".
> 3. Designing: translate the story you wrote, because, before you started doing anything, you already knew what you were going to do. Just follow what you have planned... Understanding the pain points correctly, we also start to understand our users' actions and how they think. When we master this, we can help the customer take the actions in the way that we want them to, to help them achieve their goals.
> 4. Call to action: By knowing people’s goals and pain points, whether emotional or logistical, we can anticipate their needs.... guarantee that it is aligned with the promises we made to the customer, especially when it comes to marketing. Ask yourself if what you’re saying in the marketing campaigns is really what will be shown in the product.

Foundational DevOps Patterns
cross-posted from [!softwareengineering@group.lt](https://group.lt/c/softwareengineering): https://group.lt/post/46385

> Adopting DevOps practices is nowadays a recurring task in the industry. DevOps is a set of practices intended to reduce the friction between the software development (Dev) and the IT operations (Ops), resulting in higher quality software and a shorter development lifecycle. Even though many resources are talking about DevOps practices, they are often inconsistent with each other on the best DevOps practices. Furthermore, they lack the needed detail and structure for beginners to the DevOps field to quickly understand them.
>
> In order to tackle this issue, this paper proposes **four foundational DevOps patterns: Version Control Everything, Continuous Integration, Deployment Automation, and Monitoring**. The patterns are both detailed enough and structured to be easily reused by practitioners and flexible enough to accommodate different needs and quirks that might arise from their actual usage context. Furthermore, the **patterns are tuned to the DevOps principle of Continuous Improvement by containing metrics so that practitioners can improve their pattern implementations**.

---

Besides the four above, the article actually identified and included 2 other patterns which it does not describe in full (so actually 6):

- **Cloud Infrastructure**, which includes cloud computing, scaling, infrastructure as code, ...
- **Pipeline**, "important for implementing Deployment Automation and Continuous Integration, and segregating it from the others allows us to make the solutions of these patterns easier to use, namely in contexts where a pipeline does not need to be present."

![Overview of the pattern candidates and their relation](https://group.lt/pictrs/image/0d291dda-7c3d-44b5-84f2-1b2630ebf49d.png)

The paper is interesting for the following structure in describing the patterns:

> - Name: An evocative name for the pattern.
> - Context: Contains the context for the pattern providing a background for the problem.
> - Problem: A question representing the problem that the pattern intends to solve.
> - Forces: A list of forces that the solution must balance out.
> - Solution: A detailed description of the solution for our pattern’s problem.
> - Consequences: The implications, advantages and trade-offs caused by using the pattern.
> - Related Patterns: Patterns which are connected somehow to the one being described.
> - Metrics: A set of metrics to measure the effectiveness of the pattern’s solution implementation.


Software Engineering, a new community !softwareengineering@group.lt
[!softwareengineering@group.lt](https://group.lt/c/softwareengineering) We post and discuss software engineering related information: be it programming/construction, UX/UI, software architecture, DevSecOps, software economics, research, management, requirements, AI, ... It is meant as a serious, focused community that strives for sharing content from reliable sources, and free/open access as well.

Attention economy is a pretty important concept in today's socioeconomic systems. Here is an article by Nielsen Norman Group explaining it a bit in the context of digital products.

> Digital products are competing for users’ limited attention. The modern economy increasingly revolves around the human attention span and how products capture that attention.
>
> Attention is one of the most valuable resources of the digital age. For most of human history, access to information was limited. Centuries ago many people could not read and education was a luxury. Today we have access to information on a massive scale. Facts, literature, and art are available (often for free) to anyone with an internet connection.
>
> We are presented with a wealth of information, but we have the same amount of mental processing power as we have always had. The number of minutes in every day has also stayed exactly the same. Today attention, not information, is the limiting factor.

There are many scientific works on the topic; here are some queries in computer science / software engineering databases:

- [IEEE Xplore](https://ieeexplore.ieee.org/search/searchresult.jsp?newsearch=true&queryText=%22attention%20economy%22)
- [ACM DL](https://dl.acm.org/action/doSearch?fillQuickSearch=false&target=advanced&expand=dl&AllField=AllField%3A%28%22attention+economy%22%29)
- [arXiv](https://arxiv.org/search/?query=%22attention+economy%22&searchtype=all&source=header)

Another related article by NN/g: [The Vortex: Why Users Feel Trapped in Their Devices](https://www.nngroup.com/articles/device-vortex/)

It has been a week since I started using Perject as an alternative/replacement to packages like tabspaces and persp. Perject is much better than those, and I have yet to experience a bug. The workflow is:

1. create a collection of projects: perject-open-collection
2. create a project under a collection: perject-switch
3. add a buffer to a project: perject-add-buffer-to-project

Then you can create collections, projects, ... and switch between them from any frame. It auto-saves state, and I found it very good at reloading the saved collections/projects; I have not experienced any bug with it, while I would experience many bugs with persp... Also, Perject integrates with GNU Emacs built-ins for all that, such as desktop, project.el, tab-bar, ... The author, overideal, released it recently, but it already is one of my favorite Emacs packages. Really worth trying it out. My config if anyone wants to try it out: https://codeberg.org/yymacs/yymacs/src/branch/main/yyuu/mod/yyuu-mod-emacs-uix-space.el#L43-L111

cross-posted from [!softwareengineering@group.lt](https://group.lt/c/softwareengineering): https://group.lt/post/46120

- Google: [AppSheet](https://appsheet.com/)
- Apple: [SwiftUI](https://developer.apple.com/xcode/swiftui/)
- Microsoft: [PowerApps](https://powerapps.microsoft.com)
- Amazon: [HoneyCode](https://www.honeycode.aws/), [Amplify Studio](https://aws.amazon.com/amplify/studio/)


The two paradigms of software development research
> ## Highlights
>
> - Software development research is divided into two incommensurable paradigms.
> - The **Rational Paradigm** emphasizes problem solving, planning and methods.
> - The **Empirical Paradigm** emphasizes problem framing, improvisation and practices.
> - The Empirical Paradigm is based on data and science; the Rational Paradigm is based on assumptions and opinions.
> - The Rational Paradigm undermines the credibility of the software engineering research community.

---

Very good paper by @paulralph@mastodon.acm.org discussing the Rational Paradigm (non-empirical) and the Empirical Paradigm (evidence-based, scientific) in software engineering. Historically the Rational Paradigm has dominated both software engineering research and industry, which is also evident in software engineering international standards, bodies of knowledge (e.g. IEEE CS SWEBOK), curriculum guidelines, ... Basically, much of the "standard" knowledge and mainstream literature has no basis in science, but in "guru" knowledge. Yet people rarely follow rational approaches (which suggest using detailed plans, ...) successfully or faithfully.

It also argues that currently software engineering is at level 2 on an "informal scale of empirical commitment". In comparison, medicine is at level 4 (the greatest level of empirical commitment).

![informal scale of empirical commitment](https://group.lt/pictrs/image/54e3e1d0-c8c4-4dbc-be02-a137ff472026.png)

> I think SE is at level two. Most top venues expect empirical data; however, that data often does not directly address effectiveness. Empirical findings and rigorous studies compete with non-empirical concepts and anecdotal evidence. For example, some reviews of a recent paper on software development waste [168] criticized it for its limited contribution over previous work [169], even though the previous work was based entirely on anecdotal evidence and the new paper was based on a rigorous empirical study. Meanwhile, many specialist and second-tier venues do not require empirical data at all.

And it concludes with some implications:

> 1. Much research involves developing new and improved development methods, tools, models, standards and techniques. Researchers who are unwittingly immersed in the Rational Paradigm may create artifacts based on unstated Rational-Paradigm assumptions, limiting their applicability and usefulness. For instance, the project management framework PRINCE2 prescribes that the project board (who set project goals) should not be the same people as the project team (who design the system [108]). This is based on the Rationalist assumption that problems are given, and inhibits design coevolution.
>
> 2. Having two paradigms in the same academic community causes miscommunication [4], which undermines consensus and hinders scientific progress [171]. The fundamental rationalist critique of the Empirical Paradigm is that it is patently obvious that employing a more systematic, methodical, logical process should improve outcomes [7], [23], [119], [172], [173]. The fundamental empiricist critique of the Rational Paradigm is that there is no convincing evidence that following more systematic, methodical, logical processes is helpful or even possible [3], [5], [9], [12]. As the Rational Paradigm is grounded in Rationalist epistemology, its adherents are skeptical of empirical evidence [23]; similarly, as the Empirical Paradigm is grounded in empiricist epistemology, its adherents are skeptical of appeals to intuition and common sense [5]. In other words, scholars in different paradigms talk past each other and struggle to communicate or find common ground.
>
> 3. Many reasonable professionals, who would never buy a homeopathic remedy (because a few testimonials obviously do not constitute sound evidence of effectiveness) will adopt a software method or practice based on nothing other than a few testimonials [174], [175]. Both practitioners and researchers should demand direct empirical evaluation of the effectiveness of all proposed methods, tools, models, standards and techniques (cf. [111], [176]). When someone argues that basic standards of evidence should not apply to their research, call this what it is: the special pleading fallacy [177]. Meanwhile, peer reviewers should avoid criticizing or rejecting empirical work for contradicting non-empirical legacy concepts.
>
> 4. The Rational Paradigm leads professionals “to demand up-front statements of design requirements” and “to make contracts with one another on [this] basis”, increasing risk [5]. The Empirical Paradigm reveals why: as the goals and desiderata coevolve with the emerging software product, many projects drift away from their contracts. This drift creates a paradox for the developers: deliver exactly what the contract says for limited stakeholder benefits (and possible harms), or maximize stakeholder benefits and risk breach-of-contract litigation. Firms should therefore consider alternative arrangements including in-house development or ongoing contracts.
>
> 5. The Rational Paradigm contributes to the well-known tension between managers attempting to drive projects through cost estimates and software professionals who cannot accurately estimate costs [88]. Developers underestimate effort by 30–40% on average [178] as they rarely have sufficient information to gauge project difficulty [18]. The Empirical Paradigm reveals that design is an unpredictable, creative process, for which accounting-based control is ineffective.
>
> 6. Rational Paradigm assumptions permeate IS2010 [70] and SE2014 [179], the undergraduate model curricula for information systems and software engineering, respectively. Both curricula discuss requirements and lifecycles in depth; neither mention Reflection-in-Action, coevolution, amethodical development or any theories of SE or design (cf. [180]). Nonempirical legacy concepts including the Waterfall Model and Project Triangle should be dropped from curricula to make room for evidence-based concepts, models and theories, just like in all of the other social and applied sciences.

---

> ## Abstract
>
> The most profound conflict in software engineering is not between positivist and interpretivist research approaches or Agile and Heavyweight software development methods, but between the Rational and Empirical Design Paradigms. The Rational and Empirical Paradigms are disparate constellations of beliefs about how software is and should be created. The Rational Paradigm remains dominant in software engineering research, standards and curricula despite being contradicted by decades of empirical research. The Rational Paradigm views analysis, design and programming as separate activities despite empirical research showing that they are simultaneous and inextricably interconnected. The Rational Paradigm views developers as executing plans despite empirical research showing that plans are a weak resource for informing situated action. The Rational Paradigm views success in terms of the Project Triangle (scope, time, cost and quality) despite empirical research showing that the Project Triangle omits critical dimensions of success. The Rational Paradigm assumes that analysts elicit requirements despite empirical research showing that analysts and stakeholders co-construct preferences. The Rational Paradigm views professionals as using software development methods despite empirical research showing that methods are rarely used, very rarely used as intended, and typically weak resources for informing situated action. This article therefore elucidates the Empirical Design Paradigm, an alternative view of software development more consistent with empirical evidence. Embracing the Empirical Paradigm is crucial for retaining scientific legitimacy, solving numerous practical problems and improving software engineering education.

Oh, I misread; I thought it enabled following fediverse users from within Lemmy, but now I see it is actually the other way around. Thank you for clarifying!


> Lemmy users can now be followed. Just visit a user profile from another platform like Mastodon, and click the follow button, then you will receive new posts and comments in the timeline.

does an admin need to enable the follow button? it is not appearing for me.


Wonderful! Thanks contributors for all the work!


I know right? JWST is giving us a pretty exciting moment in astronomy. Also many beautiful images🌠


cross-posted from: https://group.lt/post/46053

> A group of astronomers poring over data from the James Webb Space Telescope (JWST) has glimpsed light from ionized helium in a distant galaxy, which could indicate the presence of the **universe’s very first generation of stars**.
>
> These long-sought, inaptly named “Population III” stars would have been ginormous balls of hydrogen and helium sculpted from the universe’s primordial gas. Theorists started imagining these first fireballs in the 1970s, hypothesizing that, after short lifetimes, they exploded as supernovas, forging heavier elements and spewing them into the cosmos. That star stuff later gave rise to Population II stars more abundant in heavy elements, then even richer Population I stars like our sun, as well as planets, asteroids, comets and eventually life itself.
>
> About 400,000 years after the Big Bang, electrons, protons and neutrons settled down enough to combine into hydrogen and helium atoms. As the temperature kept dropping, dark matter gradually clumped up, pulling the atoms with it. Inside the clumps, hydrogen and helium were squashed by gravity, condensing into enormous balls of gas until, once the balls were dense enough, nuclear fusion suddenly ignited in their centers. The first stars were born.
>
> [Astronomers first divided] stars in our galaxy into types I and II in 1944. The former includes our sun and other metal-rich stars; the latter contains older stars made of lighter elements. The idea of Population III stars entered the literature decades later... Their heat or explosions could have reionized the universe.

![A color-composite NIRCam image of the RXJ2129 galaxy cluster.](https://group.lt/pictrs/image/ce384588-fce1-4851-a6c6-3a08219548ab.png)

More information:

- https://arxiv.org/abs/2212.04476

Not the package managers, as I understand it, but the service providers providing the applications; so it would include e.g. everyone hosting package archive mirrors. This all makes no sense, because the Internet, which runs on Linux, would basically stagnate.


> Article 6 of the law requires all “software application stores” to:
>
> - Assess whether each service provided by each software application enables human-to-human communication
> - Verify whether each user is over or under the age of 17
> - Prevent users under 17 from installing such communication software

It may seem unbelievable that the authors of the law didn’t think about this but it is not that surprising considering this is just one of the many gigantic consequences of this sloppily thought out and written law.

That law is a big document; it would have been helpful if Mullvad’s article had directly cited/referenced it so we could verify some of that.



Yeah, Lemmy is pretty good on that, and overall as well. I wish more people would move from the popular proprietary/centralized forums and the like to here. Maybe it just needs more word of mouth…


Nice to see you and your project here as well✨✨✨

It is pretty useful! Thanks!

PS: Also worth sharing on !nixos@lemmy.ml


@seresearchers@a.gup.pe a software engineering researchers group on Mastodon (Guppe Groups)
There are people/researchers from ACM and so on sharing pretty interesting, useful content about software engineering.

Wow the rendering is much better/faster now 🌠


I don’t know about language models in specific. I read this recently on “federated learning” https://venturebeat.com/ai/federated-learning-key-to-securing-ai/

It cites data privacy issues. Maybe it is also a more complex architecture.


cross-posted from: https://group.lt/post/44860

> Developers across government and industry should commit to using memory safe languages for new products and tools, and identify the most critical libraries and packages to shift to memory safe languages, according to a study from Consumer Reports.
>
> The US nonprofit, which is known for testing consumer products, **asked what steps can be taken to help usher in "memory safe" languages, like Rust**, over options such as C and C++. Consumer Reports said it wanted to address "industry-wide threats that cannot be solved through user behavior or even consumer choice" and it identified "memory unsafety" as one such issue.
>
> The [report](https://advocacy.consumerreports.org/research/report-future-of-memory-safety/), Future of Memory Safety, looks at a range of issues, including challenges in building memory safe language adoption within universities, levels of distrust for memory safe languages, introducing memory safe languages to code bases written in other languages, and also incentives and public accountability.

More information:

- https://advocacy.consumerreports.org/research/report-future-of-memory-safety/
- https://advocacy.consumerreports.org/wp-content/uploads/2023/01/Memory-Safety-Convening-Report.pdf


The hardest scaling issue by Codeberg (a nonprofit, free software platform/service for code hosting)
cross-posted from c/softwareengineering@group.lt: https://group.lt/post/44632

> This kind of scaling issue is new to Codeberg (a nonprofit free software project), but not to the world. All projects on earth likely went through this at a certain point or will experience it in the future.
>
> When people like me talk about scaling... It's about increasing computing power, distributed storage, replicated databases and so on. There are all kinds of technology available to solve scaling issues. So why, damn, is Codeberg still having performance issues from time to time?
>
> ...we face the "worst" kind of scaling issue in my perception. That is, if you don't see it coming (e.g. because the software gets slower day by day, or because you see how the storage pool fills up). Instead, it appears out of the blue.
>
> **The hardest scaling issue is: scaling human power.**
>
> Configuration, Investigation, Maintenance, User Support, Communication – all require some effort, and it's not easy to automate. In many cases, automation would consume even more human resources to set up than we have.
>
> There are no paid night shifts, not even payment at all. Still, people have become used to the always-available guarantees, and demand the same from us: Occasional slowness in the evening of the CET timezone? Unbearable!
>
> I do understand the demand. We definitely aim for a better service than we sometimes provide. However, sometimes, the frustration of angry social-media-guys carries me away...
>
> [There are] two primary blockers that prevent scaling human resources. The first one is: trust. Because we can't yet afford hiring employees that work on tasks for a defined amount of time, work naturally has to be distributed over many volunteers with limited time commitment... The second problem is in part technical. Unlike major players, which have nearly unlimited resources available to meet high demand, scaling Codeberg's systems...

TLDR: sustainability issues with scaling, because Codeberg is a nonprofit with very limited resources, mainly human resources, in the face of high demand. Unpaid volunteers do all the work. So it needs more people working as volunteers, and more money.



> How could you use Android, Firebase, TensorFlow, Google Cloud, Flutter, or any of your favorite Google technologies to promote employment for all, economic growth, and climate action?
>
> Join us to build solutions for one or more of the United Nations 17 Sustainable Development Goals. These goals were agreed upon in 2015 by all 193 United Nations Member States and aim to end poverty, ensure prosperity, and protect the planet by 2030.

For students. Mostly interesting for promoting the Sustainable Development Goals.

> We present the 10 most-visited posts of the previous year. This year’s list of top 10 posts highlights our work in **deepfakes, artificial intelligence, machine learning, DevSecOps, and zero trust**.

1. https://insights.sei.cmu.edu/blog/how-easy-is-it-to-make-and-detect-a-deepfake/
2. https://insights.sei.cmu.edu/blog/what-is-explainable-ai/
3. https://insights.sei.cmu.edu/blog/a-technical-devsecops-adoption-framework/
4. https://insights.sei.cmu.edu/blog/a-hitchhikers-guide-to-ml-training-infrastructure/
5. https://insights.sei.cmu.edu/blog/a-case-study-in-applying-digital-engineering/
6. https://insights.sei.cmu.edu/blog/two-categories-of-architecture-patterns-for-deployability/
7. https://insights.sei.cmu.edu/blog/the-zero-trust-journey-4-phases-of-implementation/
8. https://insights.sei.cmu.edu/blog/tactics-and-patterns-for-software-robustness/
9. https://insights.sei.cmu.edu/blog/containerization-at-the-edge/
10. https://insights.sei.cmu.edu/blog/probably-dont-rely-on-epss-yet/

> While the memory safety and security features of the Rust programming language can be effective in many situations, Rust’s compiler is very particular on what constitutes good software design practices. Whenever design assumptions disagree with real-world data and assumptions, there is the possibility of security vulnerabilities–and malicious software that can take advantage of those vulnerabilities. In this post, we will focus on users of Rust programs, rather than Rust developers. We will explore some tools for understanding vulnerabilities whether the original source code is available or not. These tools are important for understanding malicious software where source code is often unavailable, as well as commenting on possible directions in which tools and automated code analysis can improve. We also comment on the maturity of the Rust software ecosystem as a whole and how that might impact future security responses, including via the coordinated vulnerability disclosure methods advocated by the SEI’s CERT Coordination Center (CERT/CC). ![Programming Languages Maturity](https://group.lt/pictrs/image/1b49ce7d-ea9f-43e1-a956-6924b9816017.png)

> Rust is a programming language that is growing in popularity. While its user base remains small, it is widely regarded as a cool language. According to the Stack Overflow Developer Survey 2022, Rust has been the most-loved language for seven straight years. Rust boasts a unique security model, which promises memory safety and concurrency safety, while providing the performance of C/C++. Being a young language, it has not been subjected to the widespread scrutiny afforded to older languages, such as Java. Consequently, in this blog post, we would like to assess Rust’s security promises. ![Rust Protection in Context, table](https://group.lt/pictrs/image/13ba6ce2-e8cd-4228-aa7a-37cab1cd10cb.png)

There is Coulouris’ “Distributed Systems: Concepts and Design”, which provides practical examples using e.g. Java. I remember then using Clojure (which runs on the JVM and can interop with Java) and asking about/listing resources on that here: https://clojureverse.org/t/rersources-on-building-distributed-systems-using-clojure/9045

Also, Lamport’s paper on clocks listed there is a nice classical read; there are videos about it on YouTube. Also worth taking a look at Erlang/Elixir and the BEAM VM.
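For reference, the core of that clocks paper can be stated compactly; my paraphrase of the clock condition and the two update rules (C is a process-local counter, T_m the timestamp carried by message m):

$$ a \to b \implies C(a) < C(b) $$

maintained by

$$ C_i := C_i + 1 \quad \text{(before each event at process } i\text{)} $$

$$ C_j := \max(C_j, T_m) + 1 \quad \text{(when process } j \text{ receives message } m\text{)} $$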


The Engineers Are Bloggers Now
> Cristian Velazquez, a staff site reliability engineer at Uber, helped fix an important issue for the company's software in 2021. Then Uber asked him to write about it on the company's engineering blog. His post has generated over 84,000 page views since it was published.
>
> Uber is one of several large companies hoping to reach engineers this way. Organizations like Google, Apple, and Meta are also in the blogging game.
>
> The sites combine glimpses into what life is like at a company with case studies about complex programming tasks. The posts tend to have the titles of grad school papers and the editorial flair of instruction manuals. They're often created to increase transparency, provide resources to the engineering community — and entice people to go work at these companies.

Some companies' engineering feeds which I follow:

- https://developers.googleblog.com/feeds/posts/default
- https://medium.com/feed/google-developer-experts
- https://aws.amazon.com/blogs/aws/feed
- https://developer.apple.com/news/rss/news.rss
- https://engineering.fb.com/feed/
- https://azurecomcdn.azureedge.net/en-us/blog/feed/
- https://building.nubank.com.br/feed/
- https://www.uber.com/blog/engineering/rss
- https://medium.com/feed/airbnb-engineering
- https://medium.com/feed/pinterest-engineering

Combining Design Thinking with Lean Startup and Agile
> **Design Thinking** could have really helped to understand the problem customers were facing (they were looking to study new concepts, but moreover to discuss ideas with their peers in class, so interactive group learning).
>
> **Lean Startup** would have helped to avoid the problem of building something people were not looking for (training without PowerPoint),
>
> and **Agile** could have helped to cut the dev cycle by 50% by just building iteratively.
>
> Gartner introduced a model in 2016 where they connected these three models.
>
> **Gartner: Combine Design Thinking, Lean Startup and Agile to Drive Digital Innovation**

More information:

- https://www.productpizza.com/combining-design-thinking-with-lean-startup-and-agile/
- https://www.gartner.com/en/documents/3941917 (2019)
- https://www.gartner.com/en/documents/3200917 (2016)

> Expand your horizons by trying out 12 different programming languages in 2023.
>
> Go old-school with COBOL, cutting edge with Unison or esoteric with Prolog. Explore low-level code with Assembly, expressions with a Lisp or functional with Haskell!

More information:

- https://exercism.org/challenges/12in23
- https://forum.exercism.org/t/the-12in23-challenge/2213

> Stakeholders’ buy-in and support is an integral component of success for any UX project, as they translate into resources, bandwidth, and approval. However, navigating stakeholder dynamics requires a thoughtful mix of listening, collaboration, communication, influence, and negotiation. This balancing act leads to stakeholder engagement and ultimately creates successful, long-term relationships.
>
> Continuous communication with stakeholders is important for any UX project — first, because it helps them understand and appreciate what UX does and, second, because it helps UX learn about other essential aspects of the business. Despite this duality, the burden of communication usually falls on UX — because stakeholders are inherently busy and possibly focused on many other things besides UX.

Software Engineering Institute’s DevSecOps Platform-Independent Model (PIM), Version 2.1
> DevSecOps is an engineering practice that promotes collaboration among development, security, and operations. When implemented, it creates a socio-technical system that uses automation for flexible, rapid, frequent delivery of secure infrastructure and software to production. Software development organizations must tailor each DevSecOps pipeline to the people, processes, and technology needed to provide a product or service. Until recently, there was no consistent basis for managing software-intensive development, cybersecurity, and operations in distributed systems.
>
> Then in May, the SEI released version 1.0 of the DevSecOps PIM, a reusable reference architecture for DevSecOps pipelines. Software development organizations can use the online, interactive PIM as a reference architecture or assessment tool for their own DevSecOps pipelines.

More information:

- https://cmu-sei.github.io/DevSecOps-Model/
- https://insights.sei.cmu.edu/news/devsecops-platform-independent-model-receives-major-update/

> In 2022, the SEI hosted the AAAI Spring Symposium on AI Engineering alongside co-organizers from Duke University, SRI International, and MIT Lincoln Lab. The symposium focused on human-centered, scalable, and robust and secure AI, with the goal of further evolving the state of the art; gathering lessons learned, best practices, and workforce development needs; and fostering critical relationships. The papers in this collection were presented at the symposium.

More information:

- https://resources.sei.cmu.edu/library/asset-view.cfm?assetid=884163
- https://insights.sei.cmu.edu/news/symposium-delivers-new-perspectives-on-ai-engineering/
- https://resources.sei.cmu.edu/news-events/events/aaai/index.cfm


> We introduce the challenges of DevSecOps philosophy and its applicability to the development and operation of trustworthy infrastructure-as-code, and we combine the solutions into a single framework covering all crucial steps. Finally, we discuss how the proposed framework addresses the challenges and introduce an initial design for it.

> This special issue shows how the realm of infrastructure code has evolved to a status which—analyzed from a scientific perspective—can be considered mature, and rich in practices which can be seen as off-the-shelf approaches to continuous software engineering.

ACM SIGSOFT Towards Sustainable Software Business: 5th International Workshop on Software-Intensive Business
> Software producing organizations face the challenges of changing demands, rapidly evolving technology, and a dynamic ecosystem in which their products and services need to operate. These challenges hinder software organizations from being sustainable. The 5th International Workshop on Software-Intensive Business (IWSiB) brought researchers and practitioners together to discuss contributions within the emerging field of sustainable software businesses. The workshop was hosted by the 44th International Conference on Software Engineering. Birgit Penzenstadler's keynote, on software-intensive business supporting resilience and sustainability for people, sparked the interest of the 30 participating researchers, who continued to discuss 12 submissions.

You’re a person of culture as well I see; I upvote comments of culture yes📠

I remember talking with you at the NixOS matrix; nice to see you here as well💖✨✨✨🌠


It is because it departs from POSIX that it is good; I recognize the syntax for some functionality is cumbersome and hard to remember, though. There are still similarities, like command names and piping…

I use NixOS and home-manager, so to switch I just set:

  # pkgs-update is my own instance of a newer nixpkgs, for a more recent Nushell
  home-manager.users.yuu = {
    programs.nushell = {
      package = pkgs-update.nushell;
      enable = true;
      configFile.source = ../../config/nushell/config.nu;
      envFile.source = ../../config/nushell/env.nu;
    };
  };

The config.nu and env.nu are basically the defaults, just with a customized prompt.

Then in my alacritty.yml I set the shell to the nu binary:

shell:
  program: /etc/profiles/per-user/yuu/bin/nu

I also learned from the official book: https://www.nushell.sh/book. When I have doubts, I ask either on Nushell’s GitHub discussions or https://matrix.to/#/#nushell:matrix.org

And to keep a POSIX shell around:

{
  environment = {
    systemPackages = with pkgs; [
      mksh
    ];

    sessionVariables = {
      TERM = "alacritty";
      TERMINAL = "alacritty";
      # programs that read $SHELL get a POSIX shell
      SHELL = "${pkgs.mksh}/bin/mksh";
    };

    shells = [
      "${pkgs.mksh}/bin/mksh"
    ];
  };
}

You can use Nix, which works on many distros; it has the most packages of any package repository/collection.

https://nixos.org/download.html#nix-install-linux

GNU Guix is similar, but does not have as many packages.

https://guix.gnu.org/en/download/
https://guix.gnu.org/manual/en/html_node/Binary-Installation.html


I use nushell for my terminal/console (alacritty). For POSIX compatibility, mksh; I set it as SHELL so programs which expect/assume POSIX use it instead of nu. This is the way to have the best of both worlds.


Do we need to create an account for every site, or just one? I would join if it federates between communities.


For NixOS, /etc/nixos/flake.nix. Example: https://git.sr.ht/~misterio/nix-config/tree/main/item/flake.nix

For home-manager see https://nix-community.github.io/home-manager/index.html#ch-nix-flakes
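A minimal sketch of how the two can fit together in one flake.nix (the hostname my-host, the 22.11 branches, and the file layout are placeholder assumptions; see the links above for complete examples):

{
  inputs = {
    nixpkgs.url = "github:NixOS/nixpkgs/nixos-22.11";
    home-manager.url = "github:nix-community/home-manager/release-22.11";
    # make home-manager use the same nixpkgs as the system
    home-manager.inputs.nixpkgs.follows = "nixpkgs";
  };

  outputs = { nixpkgs, home-manager, ... }: {
    # "my-host" is a placeholder for your hostname
    nixosConfigurations.my-host = nixpkgs.lib.nixosSystem {
      system = "x86_64-linux";
      modules = [
        ./configuration.nix
        # lets you configure home-manager users from the same flake
        home-manager.nixosModules.home-manager
      ];
    };
  };
}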

For individual projects like that PyTorch one, you can put the flake in any git repo.

If you use the same nixpkgs revision as the one you currently have via channels, Nix should not rebuild derivations.
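For instance, a sketch of pinning the input to an exact commit (`<rev>` is a placeholder for the revision your channel currently points to):

# pin nixpkgs to the exact commit your channel uses, so existing
# store paths can be reused instead of rebuilt; replace <rev>
inputs.nixpkgs.url = "github:NixOS/nixpkgs/<rev>";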


Ongoing new version of the Software Engineering Body of Knowledge (SWEBOK)

https://www.computer.org/volunteering/boards-and-committees/professional-educational-activities/software-engineering-committee/swebok-evolution

Besides outlining the profession/discipline, it also has many references to books for each knowledge area.


nix flake update

And to add a new flake input to flake.nix:

inputs.my-flake.url = "github:owner/repo";
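The new input then shows up as an argument to outputs; a minimal sketch (my-flake is the same placeholder name as above):

outputs = { self, nixpkgs, my-flake, ... }: {
  # whatever my-flake exports is now accessible here,
  # e.g. my-flake.packages.x86_64-linux.default
};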

✨✨✨✨✨✨✨✨✨✨✨✨

Maybe there is a way to add flakes through the command line which I do not know of.


https://nixos.org/manual/nix/unstable/command-ref/new-cli/nix3-flake.html

or man nix3-flake.

For a NixOS flake example: https://git.sr.ht/~misterio/nix-config/tree/main/item/flake.nix

For specific language examples, see https://github.com/NixOS/templates (which you can instantiate with nix flake new my-project-name --template "templates#template-name"). For real examples: https://sourcegraph.com/search?q=context:global+.*+file:flake.nix+lang:Nix&patternType=regexp&sm=1

Here is a PyTorch example from when I was learning flakes:

# https://nixos.org/manual/nix/unstable/command-ref/new-cli/nix3-flake.html
# https://discourse.nixos.org/t/pytorch-cuda-on-wsl/18267
# https://discourse.nixos.org/t/pytorch-and-cuda-torch-not-compiled-with-cuda-enabled/11272
# https://gitlab.com/abstract-binary/nix-nar-rs/-/blob/main/flake.nix
# https://github.com/hasktorch/libtorch-nix
# https://github.com/google-research/dex-lang/blob/main/flake.nix
# https://yuanwang.ca/posts/getting-started-with-flakes.html

{
  description = "PyTorch";

  # Specifies other flakes that this flake depends on.
  inputs = {
    devshell.url = "github:numtide/devshell";
    utils.url = "github:numtide/flake-utils";
    nixpkgs.url = "github:nixos/nixpkgs/nixos-22.11";
  };

  # Function that produces an attribute set.
  # Its function arguments are the flakes specified in inputs.
  # The self argument denotes this flake.
  outputs = inputs@{ self, nixpkgs, utils, ... }:
    (utils.lib.eachSystem [ "x86_64-linux" ] (system:
      let
        pkgs = (import nixpkgs {
          inherit system;
          config = {
            # For CUDA.
            allowUnfree = true;
            # Enables CUDA support in packages that support it.
            cudaSupport = true;
          };
        });
      in rec {
        # Executed by `nix build .#<name>`
        packages = utils.lib.flattenTree {
          hello = pkgs.hello;
        };

        # Executed by `nix build .`
        defaultPackage = packages.hello;
        # defaultPackage = pkgs.callPackage ./default.nix { };

        # Executed by `nix develop`
        devShell = with pkgs; mkShell {
          buildInputs = ([
            python39 # numba-0.54.1 not supported for interpreter python3.10
          ] ++ (with python39.pkgs; [
            inflect
            librosa
            pip
            pytorch-bin
            unidecode
          ]) ++ (with cudaPackages; [
            cudatoolkit
          ]));

          shellHook = ''
            export CUDA_PATH=${pkgs.cudatoolkit}
          '';
        };
      }
    ));
}
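As the comments in the flake indicate: `nix develop` drops you into a shell with the Python and CUDA packages available, `nix build .#hello` builds the named package, and plain `nix build .` builds defaultPackage.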

> nix-channel works now and is a lot simpler

It is not. Once you understand flakes, you will see how much better they are. If you do not understand why flakes exist to begin with, read https://www.tweag.io/blog/2020-05-25-flakes/

Also use in conjunction with flakes (see the .envrc sketch below):

  • direnv, nix-direnv
  • devshell
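For instance, with nix-direnv a project's whole .envrc can be a single line (a sketch, assuming direnv is hooked into your shell and the project has a flake.nix with a devShell):

use flake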

Flakes are easier and better than channels. Use them instead.


Sparkles are the bestt✨✨✨✨


That will make you proficient in Emacs, which is a requirement in the long term. But the more experienced you become at it, the more you will see its shortcomings and become disillusioned with it. I hope text-editor extensions and packages will be written in a way that lets us use them regardless of editor, something akin to LSP, …


Yes. Merge both or redirect one to the other. It seems atemu is not active here, but I think I have seen someone by that name in some Nix official channel like Matrix, Discourse, or the nixpkgs repository. You could ask a Lemmy admin as well.