• 0 Posts
  • 20 Comments
Joined 8 months ago
Cake day: October 31st, 2023






  • It’s not a literature review. It’s a case report on a specific patient. It’s impossible to imagine writing the discussion of your own patient this way, or accepting a roughly five-page article without having read it.

    The journal Radiology Case Reports is refereed by an editorial board led by University of Washington professors, associate professors, and doctors of medicine.

    Radiology Case Reports is an open-access journal publishing exclusively case reports that feature diagnostic imaging. Categories in which case reports can be placed include the musculoskeletal system, spine, central nervous system, head and neck, cardiovascular, chest, gastrointestinal, genitourinary, multisystem, pediatric, emergency, women’s imaging, oncologic, normal variants, medical devices, foreign bodies, interventional radiology, nuclear medicine, molecular imaging, ultrasonography, imaging artifacts, forensic, anthropological, and medical-legal. Articles must be well-documented and include a review of the appropriate literature.

    $550 - Article publishing charge for open access

    10 days - Time to first decision

    18 days - Review time

    19 days - Submission to acceptance

    80% - Acceptance rate







  • hissing meerkat@sh.itjust.works to Science Memes@mander.xyz · hmmmm

    Only the real cardinality must. The integer cardinality could have them spaced out enough that they won’t collapse (rough numbers in the sketch below).

    For you to do this trolley problem you’d need to be outside the real-track black hole, so the question becomes: do you let a trolley go into a black hole, or do you switch it onto an infinite track that kills an infinite number of people?

    Edit: in which case the black hole must be infinitely far away and you don’t even know about it. So: do you pull the switch and cause a trolley to start killing a seemingly infinite number of people? Based on the other replies in this thread, the answer is a resounding “yes”.
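
    A back-of-envelope version of the cardinality point, treating the track as a 1-D mass distribution with one person of mass m at each occupied point (the Schwarzschild condition used here is the usual rough one, not a careful GR argument):

    ```latex
    % Real cardinality: uncountably many people sit on any segment of length L,
    % so the enclosed mass M(L) is unbounded and for some L
    %   \frac{2\,G\,M(L)}{c^{2}} > L ,
    % i.e. a horizon must form somewhere along the track.
    %
    % Integer cardinality: people spaced a distance d apart give linear
    % density \lambda = m/d, so M(L) = \lambda L, and the no-collapse
    % condition 2\,G\,M(L)/c^{2} < L holds for every L as long as
    %   \lambda = \frac{m}{d} < \frac{c^{2}}{2\,G},
    % which you can always arrange by choosing the spacing d large enough.
    ```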





  • Whether or not you use downvotes doesn’t really matter.

    If what you like is well represented by the Boba drinkers, and the Boba drinkers disproportionately dislike Coffee, then Coffee will be disproportionately excluded from the top of your results. Unless you explore deeper, the Coffee results will be pushed to the bottom, and any that happen to reach the top will have arrived there through broad appeal and will contribute very little toward the system thinking you like Coffee.

    If you don’t let the math effectively push away things that are disliked by the people whose tastes are similar to yours, then everything saturates at maximum appeal and the whole system does nothing.
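
    A minimal sketch of that effect, assuming a plain user-based collaborative filter (the users, ratings, and item names here are made up for illustration):

    ```python
    import numpy as np

    # Hypothetical user x item ratings (1 = liked, -1 = disliked, 0 = unrated).
    # Columns: [Boba, Coffee, Tea]
    ratings = np.array([
        [ 1, -1,  1],   # Boba drinker who dislikes Coffee
        [ 1, -1,  0],   # Boba drinker who dislikes Coffee
        [ 1,  0,  1],   # Boba drinker, Coffee unrated
        [-1,  1,  0],   # Coffee drinker who dislikes Boba
    ])

    # "You" have only rated Boba so far.
    you = np.array([1, 0, 0])

    def cosine(a, b):
        denom = np.linalg.norm(a) * np.linalg.norm(b)
        return a @ b / denom if denom else 0.0

    # How much each other user agrees with you.
    sims = np.array([cosine(you, r) for r in ratings])

    # Predicted score per item: similarity-weighted average of their ratings.
    pred = sims @ ratings / np.abs(sims).sum()
    print(dict(zip(["Boba", "Coffee", "Tea"], pred.round(2))))
    # Coffee comes out negative purely because the users most similar to you
    # dislike it, so it sinks to the bottom before you ever rate it yourself.
    ```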


  • There are two problems. The first is that other things you might like will be rated lower than things you appear to certainly like. That’s the “easy” problem: it has solutions where a learning agent is forced, to some degree, to prefer exploring new options over sticking to its preferences (see the sketch after this comment), but it becomes difficult when you no longer know what has or hasn’t been explored, whether because of an abstraction like dimension reduction or a practical limitation like a human not being able to explore all of Lemmy the way a robot explores a maze.

    The second is that you might have preferences that the people who like the same things you’ve already indicated a taste for tend to dislike. For example, there may be people who like both Boba and Coffee, but people who like only one tend to dislike the other. If you happen to encounter Boba first, then Coffee will be predicted to be disliked, based on the overall preferences of the people who agree with your Boba preference.
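
    A minimal sketch of that “easy” fix, forcing some fraction of recommendations to go to unexplored items (epsilon-greedy style); the item names and scores are hypothetical:

    ```python
    import random

    # Hypothetical predicted scores from the recommender;
    # None means the item is effectively unexplored for this user.
    predicted = {"Boba": 0.9, "Tea": 0.4, "Coffee": -0.7, "Matcha": None}

    def recommend(predicted, epsilon=0.2):
        """With probability epsilon pick an unexplored item,
        otherwise pick the highest-scoring known item."""
        unexplored = [item for item, s in predicted.items() if s is None]
        if unexplored and random.random() < epsilon:
            return random.choice(unexplored)
        scored = {item: s for item, s in predicted.items() if s is not None}
        return max(scored, key=scored.get)

    # Mostly recommends Boba, but occasionally tries Matcha to find out
    # whether you'd like it -- which stops working once dimension reduction
    # hides what counts as "unexplored".
    print([recommend(predicted) for _ in range(10)])
    ```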


  • No, not quite that simply. That’s the basic idea of the recommendation systems that were common in the 1990s. The algorithm requires a tremendous amount of dimensionality reduction to work at scale: as simply described, it would need a trillion weights to compare the preferences of a million users against a million other users. If you reduce that to a standard 100–1000ish dimensions of preference it becomes feasible (a small sketch follows below), but at the low end it only contains about as much information as your own choices of which communities to subscribe to or block (though obviously with a much lower barrier to entry).

    There’s another important aspect of learning that the simple description leaves out: exploration. The system will quickly start showing you things you reliably like, but it won’t experiment with things it doesn’t yet know whether you’d like, in order to find out.
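
    A minimal sketch of that kind of dimensionality reduction, using a truncated SVD to compress a user x item preference matrix into a few latent “taste” dimensions; the sizes here are tiny so it runs, but the scaling argument is the same:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_users, n_items, k = 1000, 500, 20   # k latent taste dimensions

    # Hypothetical preferences: +1 like, -1 dislike, 0 unrated (mostly unrated).
    prefs = rng.choice([-1, 0, 0, 0, 1], size=(n_users, n_items)).astype(float)

    # Truncated SVD: prefs is approximately user_factors @ item_factors.
    U, s, Vt = np.linalg.svd(prefs, full_matrices=False)
    user_factors = U[:, :k] * s[:k]   # each user described by k numbers
    item_factors = Vt[:k, :]          # each item described by k numbers

    # Comparing two users now costs k multiplications instead of n_items,
    # and storage drops from n_users*n_items to (n_users + n_items)*k.
    print("full matrix entries:", prefs.size)
    print("factored entries:   ", user_factors.size + item_factors.size)
    ```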