soc.octade.net is a Fediverse instance that uses the ActivityPub protocol. In other words, users at this host can communicate with people who use software like Mastodon, Pleroma, Friendica, etc. all around the world.

This server runs the snac software and there is no automatic sign-up process.

Admin email
social@octade.net

Search results for tag #code

grobi » 🌐
@grobi@defcon.social

3ML: Framework for multi-wavelength/multi-messenger analysis

ThreeML is supported by the National Science Foundation (NSF): nsf.gov/

FYI:
heasarc.gsfc.nasa.gov/xanadu/x

ui.adsabs.harvard.edu/abs/2015

arxiv.org/pdf/1507.08343

The Multi-Mission Maximum Likelihood framework (3ML) provides a common high-level interface and model definition for coherent and intuitive modeling of sources using all the available data, no matter their origin. Astrophysical sources are observed by different instruments at different wavelengths with an unprecedented quality, and each instrument and data type has its own ad-hoc software and handling procedure. 3ML's architecture is based on plug-ins; the package uses the official software of each instrument under the hood, thus guaranteeing that 3ML is always using the best possible methodology to deal with the data of each instrument. Though Maximum Likelihood is in the name for historical reasons, 3ML is an interface to several Bayesian inference algorithms such as MCMC and nested sampling as well as likelihood optimization algorithms.
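
The plug-in architecture boils down to a simple contract: each plugin wraps one instrument's data and exposes a log-likelihood for a candidate model, and a joint fit just sums those log-likelihoods. A toy sketch of that idea (not 3ML's actual API; all names below are ours):

```python
import random

class GaussianPlugin:
    """Toy 'instrument plugin': wraps one dataset and knows how to score
    a model against it (Gaussian measurement errors, up to a constant)."""
    def __init__(self, x, y, sigma):
        self.x, self.y, self.sigma = x, y, sigma

    def log_like(self, model):
        return sum(-0.5 * ((yi - model(xi)) / s) ** 2
                   for xi, yi, s in zip(self.x, self.y, self.sigma))

def joint_log_like(plugins, model):
    # A joint fit simply sums the per-instrument log-likelihoods.
    return sum(p.log_like(model) for p in plugins)

# Two 'instruments' observing the same linear source y = 2x + 1,
# each with its own noise level
random.seed(0)
xs = [i / 10 for i in range(20)]
data_a = GaussianPlugin(xs, [2 * x + 1 + random.gauss(0, 0.1) for x in xs], [0.1] * 20)
data_b = GaussianPlugin(xs, [2 * x + 1 + random.gauss(0, 0.3) for x in xs], [0.3] * 20)

# Crude grid search over the slope; intercept fixed at its true value
best = max((joint_log_like([data_a, data_b], lambda x, a=a: a * x + 1), a)
           for a in [1.0, 1.5, 2.0, 2.5, 3.0])
print("best slope:", best[1])  # → 2.0
```

In 3ML proper, the plugins wrap each instrument's official software rather than a toy Gaussian, but the joint-likelihood composition is the same idea.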


    grobi » 🌐
    @grobi@defcon.social

    From technic960183

    spherimatch:
    A Python package for cross-matching and self-matching in spherical coordinates.

    spherimatch is a Python package for efficient cross-matching and self-matching of astronomical catalogs in spherical coordinates. Designed for use in astrophysics, where data is naturally distributed on the celestial sphere, the package enables fast matching with an algorithmic complexity of O(N log N). It supports Friends-of-Friends (FoF) group identification and duplicate removal in spherical coordinates, and integrates easily with common data processing tools such as pandas.
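
Angular cross-matching itself is simple to state. Below is a brute-force O(N·M) reference version using the haversine separation (illustrative only, not spherimatch's API or algorithm; packages like spherimatch reach O(N log N) with spatial indexing):

```python
import math

def ang_sep_deg(ra1, dec1, ra2, dec2):
    """Haversine angular separation in degrees between two sky positions."""
    r1, d1, r2, d2 = map(math.radians, (ra1, dec1, ra2, dec2))
    h = (math.sin((d2 - d1) / 2) ** 2
         + math.cos(d1) * math.cos(d2) * math.sin((r2 - r1) / 2) ** 2)
    return math.degrees(2 * math.asin(math.sqrt(h)))

def cross_match(cat1, cat2, tol_deg):
    """For each cat1 source, find the nearest cat2 source within tol_deg."""
    matches = []
    for i, (ra1, dec1) in enumerate(cat1):
        best = min(((ang_sep_deg(ra1, dec1, ra2, dec2), j)
                    for j, (ra2, dec2) in enumerate(cat2)), default=None)
        if best and best[0] <= tol_deg:
            matches.append((i, best[1]))
    return matches

cat_a = [(10.0, -5.0), (180.0, 45.0)]        # (RA, Dec) in degrees
cat_b = [(10.0002, -5.0001), (90.0, 0.0)]
print(cross_match(cat_a, cat_b, tol_deg=1 / 3600))  # 1 arcsec → [(0, 0)]
```

The haversine form is numerically stable for small separations, which is exactly the regime cross-matching cares about.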

    github.com/technic960183/spher

    technic960183.github.io/spheri
    technic960183.github.io/spheri

    pypi.org/project/fofpy/
    linuxtut.com/en/68a22081e84803

      grobi » 🌐
      @grobi@defcon.social

      AutoWISP

      Kaloyan Penev, Angel Romero and S. Javad Jafarzadeh have developed a software pipeline, AutoWISP, for extracting high-precision photometry from citizen scientists' observations made with consumer-grade color digital cameras (digital single-lens reflex, or DSLR, cameras), based on their previously developed tool, AstroWISP. The new pipeline is designed to convert these observations, including color images, into high-precision light curves of stars.

      "We outline the individual steps of the pipeline and present a case study using a Sony-alpha 7R II DSLR camera, demonstrating sub-percent photometric precision, and highlighting the benefits of three-color photometry of stars. Project PANOPTES will adopt this photometric pipeline and, we hope, be used by citizen scientists worldwide. Our aim is for AutoWISP to pave the way for potentially transformative contributions from citizen scientists with access to observing equipment."

      Code site:
      + AutoWISP
      github.com/kpenev/AutoWISP
      + Documentation:
      kpenev.github.io/AutoWISP/

      + AstroWISP
      github.com/kpenev/AstroWISP
      pypi.org/project/astrowisp/
      + Documentation
      kpenev.github.io/AstroWISP/

      Briefly, the image processing pipeline steps and their products are shown; the arrows indicate the products of each step and where they are used:

1 Calibration
2 Source Extraction
3 Astrometry
4 Photometry
5 PSF Fitting
+ Aperture Photometry
6 PRF Fitting
7 Magnitude Fitting
8 Light Curve Generation
9 Post Processing
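
The staged layout above can be mimicked with a minimal driver in which each stage consumes the previous stage's product (the stage names follow the outline; the implementations are stubs of our own, not AutoWISP code):

```python
# Hypothetical sketch of a staged photometry pipeline. Each stage takes the
# previous stage's product and adds its own; the driver chains them.
def calibrate(raw):            return {**raw, "calibrated": True}
def extract_sources(img):      return {**img, "sources": ["star1", "star2"]}
def solve_astrometry(img):     return {**img, "wcs": "solved"}
def do_photometry(img):        return {**img, "fluxes": {"star1": 1.0, "star2": 0.5}}

def make_lightcurves(imgs):
    # Collect per-image fluxes into one light curve per star.
    curves = {}
    for img in imgs:
        for star, flux in img["fluxes"].items():
            curves.setdefault(star, []).append(flux)
    return curves

STAGES = [calibrate, extract_sources, solve_astrometry, do_photometry]

def run_pipeline(raw_images):
    processed = []
    for raw in raw_images:
        img = raw
        for stage in STAGES:   # each stage consumes the previous product
            img = stage(img)
        processed.append(img)
    return make_lightcurves(processed)

print(run_pipeline([{"frame": 1}, {"frame": 2}]))
```

The real pipeline's stages are, of course, substantial numerical steps, but the data-flow shape (per-image stages, then a cross-image light-curve step) matches the diagram.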


      Shown is the source extraction versus the catalogue projections from our astrometry step, overlaid on the corresponding FITS image: blue squares are the projected catalogue sources and red squares are the sources extracted by our astrometry.


      This is the resulting phase-folded light curve for the WASP-33 b exoplanet transit, observed by Project PANOPTES (blue points and circles) and TESS (red points), together with the theoretical light curve based on the best-known system parameters (green curve). The raw PANOPTES DSLR measurements, originating from the 4 color channels of 4 cameras in Hawaii (Mauna Loa Observatory) and California (Mt. Wilson), are shown as blue points. The blue points are binned in time to create the blue circles and corresponding error bars. Note that the scatter in the TESS points is not instrumental; rather, it is intrinsic variability of the host star, which is a member of the delta Scuti class of variable stars.


      The scatter (median absolute deviation from the median) of the individual channel lightcurves of PANOPTES observations of a 10 × 15 degree field centered on FU Orionis, with each of their corresponding image colors (RGGB). We see that AutoWISP enables a few parts per thousand photometric precision per exposure even from images with Bayer masks, significantly outperforming prior efforts. Even individual color channels result in better than 1% photometry per 2 min exposure.


        grobi » 🌐
        @grobi@defcon.social

        Thanks to Sam Van Kooten
        github.com/svank

        wispr-analysis

        Shared tools for WISPR data analysis

        Some highlights

        plot_utils.py
        + plot_WISPR:
        Aims to be a versatile function that does the Right Thing for plotting WISPR images, with colorbar bounds that are adjusted for inner and outer FOV and for L2 or L3 images, a square-root-scaled colorbar, and WCS coordinate support
        + *_axis_dates: Helper util for labeling a temporal axis with dates.
        + plot_orbit:
        Reads a directory (or nested set of directories) of WISPR files and plots a diagram showing the orbital path of PSP and the locations where images were taken, like this:

        projections.py
        + reproject_to_radial: Proof-of-concept code for reprojecting data into a radial coordinate system (where each row of the output array is a radial line out from the Sun).
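
The radial-reprojection idea can be sketched with a nearest-neighbor polar resampler (a toy of our own, not the wispr-analysis implementation):

```python
import math

def reproject_to_radial(img, center, n_theta=4, n_r=3):
    """Toy polar resampling (nearest-neighbor): each output row samples a
    radial line outward from `center`, as in the description above."""
    cy, cx = center
    h, w = len(img), len(img[0])
    max_r = min(cy, cx, h - 1 - cy, w - 1 - cx)  # stay inside the image
    out = []
    for t in range(n_theta):
        theta = 2 * math.pi * t / n_theta
        row = []
        for k in range(n_r):
            r = max_r * k / (n_r - 1)
            y = round(cy + r * math.sin(theta))   # nearest source pixel
            x = round(cx + r * math.cos(theta))
            row.append(img[y][x])
        out.append(row)
    return out

# 5x5 image whose value encodes the pixel's x coordinate
img = [[x for x in range(5)] for _ in range(5)]
print(reproject_to_radial(img, center=(2, 2)))
# → [[2, 3, 4], [2, 2, 2], [2, 1, 0], [2, 2, 2]]
```

A production version would interpolate and use the image's WCS to place the Sun, but the row-per-radial-line structure is the same.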

        data_cleaning.py
        + dust_streak_filter: Code for identifying debris streaks in the WISPR images
        + clean_fits_files: Function to batch-run dust_streak_filter on a directory of images.

        composites.py
        + gen_composite: Reprojects an inner- and outer-FOV image into a common coordinate system

        utils.py
        + to_timestamp: Parse a timestamp from a handful of formats, including the timestamps inside WISPR headers, or entire WISPR filenames. Returns a numerical timestamp.
        + collect_files: Walks a directory of WISPR files (or a directory of subdirectories of WISPR images), identifies all the WISPR images, sorts them, and separates them by inner and outer FOVs.
        + ignore_fits_warnings: Suppresses the warnings Astropy raises when reading WISPR FITS files or parsing WCS data.
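
A to_timestamp-style helper can be approximated by trying a list of candidate formats in order (a hypothetical re-implementation; the format list is our guess, not the package's):

```python
from datetime import datetime, timezone

# Candidate formats to try in order; hypothetical, for illustration.
FORMATS = [
    "%Y-%m-%dT%H:%M:%S.%f",   # ISO with fractional seconds
    "%Y-%m-%dT%H:%M:%S",      # plain ISO
    "%Y%m%dT%H%M%S",          # compact, filename-friendly
]

def to_timestamp(s):
    """Parse a timestamp string in any known format; return POSIX seconds."""
    for fmt in FORMATS:
        try:
            dt = datetime.strptime(s, fmt).replace(tzinfo=timezone.utc)
            return dt.timestamp()
        except ValueError:
            continue
    raise ValueError(f"unrecognized timestamp: {s!r}")

print(to_timestamp("2021-01-01T00:00:00"))  # → 1609459200.0
print(to_timestamp("20210101T000000"))      # → 1609459200.0
```

Returning a plain numerical timestamp (as the real helper does) makes downstream sorting and time-binning trivial.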

        github.com/svank/wispr_analysis
        Documentation:
        svank.github.io/wispr_analysis/


          grobi » 🌐
          @grobi@defcon.social

          2025-08-20

          Four days ago, Leandro Beraldo e Silva released:
          lberaldoesilva/tropygal version 0.1.4
          Entropy estimates and distribution functions for galactic dynamics

          tropygal is a pure-Python package for entropy estimates in the context of galactic dynamics, though it can be used in other contexts too. It also provides analytical distribution functions and densities of states for models that have closed-form expressions.
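
Entropy estimates from discrete samples are commonly built on nearest-neighbor distances; here is a 1-D Kozachenko-Leonenko sketch (our own illustration of the technique, not tropygal's code):

```python
import math
import random

def entropy_kl_1d(samples):
    """1-D Kozachenko-Leonenko nearest-neighbor estimate of the
    differential entropy (in nats)."""
    x = sorted(samples)
    n = len(x)
    # distance from each point to its nearest neighbor
    nn = ([x[1] - x[0]]
          + [min(x[i] - x[i - 1], x[i + 1] - x[i]) for i in range(1, n - 1)]
          + [x[-1] - x[-2]])
    euler_gamma = 0.5772156649
    # H ≈ psi(n) - psi(1) + (1/n) Σ ln(2 r_i), with psi(1) = -gamma
    # and psi(n) ≈ ln n - 1/(2n) for large n
    return (math.log(n) - 1 / (2 * n) + euler_gamma
            + sum(math.log(2 * d) for d in nn) / n)

random.seed(42)
sample = [random.gauss(0.0, 1.0) for _ in range(4000)]
est = entropy_kl_1d(sample)
true_h = 0.5 * math.log(2 * math.pi * math.e)  # ≈ 1.4189 for N(0, 1)
print(est, true_h)
```

For a standard normal the estimate lands close to the analytic value 0.5 ln(2πe); the estimator is consistent, with bias and scatter shrinking as the sample grows.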

          ** Acknowledgements
          Development of tropygal was supported by the following research grants:
          + NASA ATP awards 80NSSC20K0509 and 80NSSC24K0938
          + U.S. NSF AAG grant AST-2009122
          + STFC Ernest Rutherford fellowship (ST/X004066/1)
          + JSPS KAKENHI Grant Numbers JP24K07101, JP21K13965, and JP21H00053
          + CNPq (309723/2020-5)
          + Heising Simons Foundation grant # 2022-3927

          ** Funding agencies:
          + NASA ATP - NASA Astrophysical Theory Program (US)
          + NSF - National Science Foundation (US)
          + STFC - Science and Technology Facilities Council (UK)
          + JSPS - Japan Society for the Promotion of Science (Japan)
          + CNPq – Conselho Nacional de Desenvolvimento Científico e Tecnológico (Brasil)
          + Heising Simons Foundation (US)

          github.com/lberaldoesilva/trop
          tropygal.readthedocs.io/en/lat
          link.springer.com/epdf/10.1007
          mdpi.com/1099-4300/18/1/13

            grobi » 🌐
            @grobi@defcon.social

            PIRATES
            (Polarimetric Image Reconstruction AI for Tracing Evolved Structures)
            uses machine learning to perform image reconstruction.

            It uses MCFOST to generate models, which are then used to build, train, iteratively fit, and evaluate PIRATES.

            Optical interferometric image reconstruction is a challenging, ill-posed optimization problem which usually relies on heavy regularization for convergence. Conventional algorithms regularize in the pixel domain, without cognizance of spatial relationships or physical realism, with limited utility when this information is needed to reconstruct images. Here we present PIRATES (Polarimetric Image Reconstruction AI for Tracing Evolved Structures), the first image reconstruction algorithm for optical polarimetric interferometry. PIRATES has a dual structure optimized for parsimonious reconstruction of high fidelity polarized images and accurate reproduction of interferometric observables. The first stage, a convolutional neural network (CNN), learns a physically meaningful prior of self-consistent polarized scattering relationships from radiative transfer images. The second stage, an iterative fitting mechanism, uses the CNN as a prior for subsequent refinement of the images with respect to their polarized interferometric observables. Unlike the pixel-wise adjustments of traditional image reconstruction codes, PIRATES reconstructs images in a latent feature space, imparting a structurally derived implicit regularization.
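
The key design choice, refining in a latent feature space instead of adjusting pixels directly, can be caricatured with a linear "decoder" (purely illustrative; PIRATES uses a trained CNN as the decoder/prior):

```python
# Toy latent-space fitting: rather than optimizing every pixel, we optimize
# a small latent vector z and let a fixed 'decoder' map it to an image.
# The decoder itself acts as the implicit regularizer.

def decoder(z):
    # 2 latent parameters -> 4 'pixels' (a stand-in for a trained network)
    return [z[0] + z[1], z[0] - z[1], 2 * z[0], 3 * z[1]]

def loss(z, target):
    # squared mismatch between decoded image and target observables
    return sum((p - t) ** 2 for p, t in zip(decoder(z), target))

def fit_latent(target, steps=500, lr=0.02):
    z = [0.0, 0.0]
    for _ in range(steps):
        # forward-difference gradient descent on the latent vector
        grad = []
        for i in range(len(z)):
            zp = list(z)
            zp[i] += 1e-5
            grad.append((loss(zp, target) - loss(z, target)) / 1e-5)
        z = [zi - lr * g for zi, g in zip(z, grad)]
    return z

target = decoder([1.0, -0.5])   # 'observations' generated by a known z
z_fit = fit_latent(target)
print(z_fit)                    # recovers approximately [1.0, -0.5]
```

Because the search runs over z, any fitted image is by construction something the decoder can produce, which is the "structurally derived implicit regularization" the abstract describes.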

            github.com/LucindaLilley/PIRAT
            ui.adsabs.harvard.edu/abs/2025
            arxiv.org/pdf/2505.11950

            CREDITS:
            Lilley, Lucinda; Norris, Barnaby; Tuthill, Peter; Spalding, Eckhart; Lucas, Miles; Zhang, Manxuan; Millar-Blanchaer, Maxwell; Pinte, Christophe; Bottom, Michael; Guyon, Olivier; Lozi, Julien; Deo, Vincent; Vievard, Sébastien; Wong, Alison P.; Ahn, Kyohoon; Ashcraft, Jaren


            The performance of PIRATES with signal to noise consistent with recent VAMPIRES NRM data, with no algorithmic treatment to constrain the influence of noise. Visibilities and closure phases with injected noise and corresponding error bars are plotted in columns 1 and 2. Stage 1 (CNN) does a good job of a moderate resolution and low noise reconstruction of the ground truth (top row, columns 3-5); however, significant amounts of noise are introduced into the images during the iterative fitting (middle row, columns 3-5). The ground truth images are displayed in the bottom row, columns 3-5.


              𝕂𝚞𝚋𝚒𝚔ℙ𝚒𝚡𝚎𝚕 » 🌐
              @kubikpixel@chaos.social

              Wilson boosted

              OCTADE » 🌐
              @octade@soc.octade.net

              "Remember that there is a distinction between a programming language and a graphical user interface. Don't confuse snazzy graphics (generated using someone else's libraries and tools) with good programming."
              ~ Bjarne Stroustrup (C++ Inventor)

              @infostorm@a.gup.pe @hacking@a.gup.pe @c@a.gup.pe @programming@a.gup.pe @dev@a.gup.pe @quotes@a.gup.pe