Category: Blog

  • epanetReader


    epanetReader

    epanetReader is an R package for reading water network simulation data in Epanet’s .inp and .rpt formats into R. Some basic summary information and plots are also provided.

Epanet is a highly popular tool for water network simulation, but it can be difficult to access network information for subsequent analysis and visualization. Analysis and visualization are real strengths of R, however, and many existing R tools support them.

    In addition to this README page, information about epanetReader is available from Environmental Modelling & Software (pdf) and ASCE Conference Proceedings (pdf).

    Installation

• the latest released version: install.packages("epanetReader")
• the development version: devtools::install_github("bradleyjeck/epanetReader")

    Getting Started

    Network files

Read network information from an .inp file using a syntax similar to the popular read.table or read.csv functions. Note that example network 1, which ships with Epanet, causes a warning. The warning is just a reminder of how R deals with integer IDs.

    library(epanetReader)
    n1 <- read.inp("Net1.inp") 
    ## Warning in PATTERNS(allLines): patterns have integer IDs, see ?epanet.inp
    
    ## Warning in CURVES(allLines): curves have integer IDs, see ?epanet.inp
    

    Retrieve summary information about the network.

    summary(n1)
    ## EPANET Example Network 1
    ## A simple example of modeling chlorine decay. Both bulk and
    ## wall reactions are included.
    ## 
    ##             Number
    ## Junctions        9
    ## Tanks            1
    ## Reservoirs       1
    ## Pipes           12
    ## Pumps            1
    ## Quality         11
    ## Coordinates     11
    ## Labels           3
    

A basic network plot is also available.

    plot(n1)

    Net 1 plot

    The read.inp function returns an object with structure similar to the .inp file itself. A section in the .inp file corresponds to a named entry in the list. These entries are accessed using the $ syntax of R.

    names(n1)
    ##  [1] "Title"       "Junctions"   "Tanks"       "Reservoirs"  "Pipes"      
    ##  [6] "Pumps"       "Valves"      "Demands"     "Patterns"    "Curves"     
    ## [11] "Controls"    "Rules"       "Energy"      "Status"      "Emitters"   
    ## [16] "Quality"     "Sources"     "Reactions"   "Mixing"      "Times"      
    ## [21] "Report"      "Options"     "Coordinates" "Vertices"    "Labels"     
    ## [26] "Backdrop"    "Tags"
    

    Sections of the .inp file are stored as a data.frame or character vector. For example, the junction table is stored as a data.frame and retrieved as follows. In this case patterns were not specified in the junction table and so are marked NA.

    n1$Junctions
    ##   ID Elevation Demand Pattern
    ## 1 10       710      0      NA
    ## 2 11       710    150      NA
    ## 3 12       700    150      NA
    ## 4 13       695    100      NA
    ## 5 21       700    150      NA
    ## 6 22       695    200      NA
    ## 7 23       690    150      NA
    ## 8 31       700    100      NA
    ## 9 32       710    100      NA
    

    A summary of the junction table shows that Net1.inp has nine junctions with elevations ranging from 690 to 710 and demands ranging from 0 to 200. Note that the node ID is stored as a character rather than an integer or factor.

    summary(n1$Junctions)
    ##       ID              Elevation         Demand      Pattern       
    ##  Length:9           Min.   :690.0   Min.   :  0.0   Mode:logical  
    ##  Class :character   1st Qu.:695.0   1st Qu.:100.0   NA's:9        
    ##  Mode  :character   Median :700.0   Median :150.0                 
    ##                     Mean   :701.1   Mean   :122.2                 
    ##                     3rd Qu.:710.0   3rd Qu.:150.0                 
    ##                     Max.   :710.0   Max.   :200.0
    

    Epanet Simulation Results

Results of the network simulation specified in Net1.inp may be stored in Net1.rpt by running Epanet from the command line. Note that the [REPORT] section of the .inp file should contain the following lines in order to generate output readable by this package.

    [REPORT]
    Page 0
    Links All
    Nodes All

On Windows, calling the Epanet executable epanet2d runs the simulation.

    >epanet2d Net1.inp Net1.rpt 
    
    ... EPANET Version 2.0
    
      o Retrieving network data
      o Computing hydraulics 
      o Computing water quality
      o Writing output report to Net1.rpt
    
    ... EPANET completed.
    

    The .rpt file generated by Epanet may be read into R using read.rpt(). The simulation is summarized over junctions, tanks and pipes.

    n1r <- read.rpt("Net1.rpt") 
    summary(n1r)
    ## Contains node results for  25 time steps 
    ## 
    ## Summary of Junction Results: 
    ##      Demand         Pressure        Chlorine     
    ##  Min.   :  0.0   Min.   :106.8   Min.   :0.1500  
    ##  1st Qu.: 80.0   1st Qu.:116.1   1st Qu.:0.3500  
    ##  Median :120.0   Median :119.8   Median :0.5100  
    ##  Mean   :122.2   Mean   :119.6   Mean   :0.5434  
    ##  3rd Qu.:160.0   3rd Qu.:123.0   3rd Qu.:0.7400  
    ##  Max.   :320.0   Max.   :133.9   Max.   :1.0000  
    ## 
    ## Summary of Tank Results:
    ##      Demand             Pressure        Chlorine    
    ##  Min.   :-1100.000   Min.   :48.22   Min.   :0.590  
    ##  1st Qu.: -660.000   1st Qu.:52.00   1st Qu.:0.660  
    ##  Median :  258.000   Median :55.52   Median :0.750  
    ##  Mean   :   -5.741   Mean   :54.86   Mean   :0.764  
    ##  3rd Qu.:  505.380   3rd Qu.:57.54   3rd Qu.:0.850  
    ##  Max.   : 1029.420   Max.   :60.04   Max.   :1.000  
    ## 
    ## Contains link results for  25 time steps 
    ## 
    ## Summary of Pipe Results:
    ##       Flow             Velocity         Headloss    
    ##  Min.   :-1029.42   Min.   :0.0000   Min.   :0.000  
    ##  1st Qu.:   41.37   1st Qu.:0.3475   1st Qu.:0.110  
    ##  Median :  113.08   Median :0.5700   Median :0.300  
    ##  Mean   :  245.35   Mean   :0.8070   Mean   :0.644  
    ##  3rd Qu.:  237.23   3rd Qu.:1.0075   3rd Qu.:0.755  
    ##  Max.   : 1909.42   Max.   :2.7300   Max.   :3.210  
    ## 
    ## Energy Usage:
    ##   Pump usageFactor avgEfficiency kWh_per_Mgal avg_kW peak_kW dailyCost
    ## 1    9       57.71            75       880.42  96.25   96.71         0
    

    The default plot of simulation results is a map for time period 00:00:00. Note that the object created from the .inp file is a required argument to make the plot.

    plot( n1r, n1)

    Net 1 rpt plot

In contrast to the treatment of .inp files described above, data from a .rpt file is stored in a structure that differs slightly from the file itself. The function returns an object (list) with one data.frame for node results and another for link results. These two data frames contain results from all the time periods. This storage choice was made to facilitate time-series plots.

    Entries in the epanet.rpt object (list) created by read.rpt() are found using the names() function.

    names(n1r)
    ## [1] "nodeResults" "linkResults" "energyUsage"
    

    Results for a chosen time period can be retrieved using the subset function.

    subset(n1r$nodeResults, Timestamp == "0:00:00")
    ##    ID   Demand    Head Pressure Chlorine      note Timestamp timeInSeconds
    ## 1  10     0.00 1004.35   127.54      0.5             0:00:00             0
    ## 2  11   150.00  985.23   119.26      0.5             0:00:00             0
    ## 3  12   150.00  970.07   117.02      0.5             0:00:00             0
    ## 4  13   100.00  968.87   118.67      0.5             0:00:00             0
    ## 5  21   150.00  971.55   117.66      0.5             0:00:00             0
    ## 6  22   200.00  969.08   118.76      0.5             0:00:00             0
    ## 7  23   150.00  968.65   120.74      0.5             0:00:00             0
    ## 8  31   100.00  967.39   115.86      0.5             0:00:00             0
    ## 9  32   100.00  965.69   110.79      0.5             0:00:00             0
    ## 10  9 -1866.18  800.00     0.00      1.0 Reservoir   0:00:00             0
    ## 11  2   766.18  970.00    52.00      1.0      Tank   0:00:00             0
    ##     nodeType
    ## 1   Junction
    ## 2   Junction
    ## 3   Junction
    ## 4   Junction
    ## 5   Junction
    ## 6   Junction
    ## 7   Junction
    ## 8   Junction
    ## 9   Junction
    ## 10 Reservoir
    ## 11      Tank
    

A comparison with the corresponding entry of the .rpt file, shown below for reference, reveals that four columns have been added to the table. This extra information makes visualizing the results easier.

      Node Results at 0:00:00 hrs:
      --------------------------------------------------------
                         Demand      Head  Pressure  Chlorine
      Node                  gpm        ft       psi      mg/L
      --------------------------------------------------------
      10                   0.00   1004.35    127.54      0.50
      11                 150.00    985.23    119.26      0.50
      12                 150.00    970.07    117.02      0.50
      13                 100.00    968.87    118.67      0.50
      21                 150.00    971.55    117.66      0.50
      22                 200.00    969.08    118.76      0.50
      23                 150.00    968.65    120.74      0.50
      31                 100.00    967.39    115.86      0.50
      32                 100.00    965.69    110.79      0.50
      9                -1866.18    800.00      0.00      1.00  Reservoir
      2                  766.18    970.00     52.00      1.00  Tank
    

    Epanet-msx simulation results

Results of a multi-species simulation by Epanet-msx can be read as well.

The read.msxrpt() function creates an S3 object of class epanetmsx.rpt. Similar to the approach above, there is one data frame for node results and another for link results.
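
As a quick sketch, assuming a multi-species report file named Net1-msx.rpt (a hypothetical file name) produced by an Epanet-msx run, usage mirrors read.rpt():

```r
library(epanetReader)

# Hypothetical file name; use the report written by your Epanet-msx run
mx <- read.msxrpt("Net1-msx.rpt")
summary(mx)
names(mx$nodeResults)  # node results across all time steps
```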

    Usage with other packages

    ggplot2

The ggplot2 package makes it easy to create complex graphics by allowing users to describe the plot in terms of the data. Continuing the Net1 example, here we plot chlorine concentration over time at each node in the network.

    library(ggplot2)
    qplot( data= n1r$nodeResults,  
           x = timeInSeconds/3600, y = Chlorine, 
           facets = ~ID, xlab = "Hour")  

    Net 1 Cl plot

    Animation

    The animation package is useful for creating a video from successive plots.

    # example with animation package 
    library(animation)
    
    #unique time stamps
ts <- unique(n1r$nodeResults$Timestamp)
    imax <- length(ts)
    
    # generate animation of plots at each time step
    saveHTML(
      for( i in 1:imax){
        plot(n1r, n1, Timestep = ts[i]) 
      }
    )

    References

Rossman, L. A. (2000). EPANET 2 Users Manual. US EPA, Cincinnati, Ohio.

    Visit original content creator repository https://github.com/bradleyjeck/epanetReader
  • JohannesSteu.JwtAuth

    JohannesSteu.JwtAuth

This package is a simple demo of how to implement JWT authentication in Neos Flow.
For more details about JSON Web Tokens themselves, check https://jwt.io/introduction/.

This mechanism is a great choice for signing API requests in Flow.

    This package contains

    JwtToken

This class represents a JWT token. It contains the JWT string which is sent in your request. The JWT string must be provided in an X-JWT header.
The payload itself must contain a property accountIdentifier.

    JwtTokenProvider

The JwtTokenProvider validates a JwtToken. It first checks whether the token contains a JWT string at all and then tries to decode it with a configured shared secret. If the payload can be decoded, it creates a transient account with the data from the payload and sets this account as authenticated.

    Access data from the payload in flow

This demo implementation sets the full payload into the authenticated token. To access the data
in your Flow application:

    $authenticationToken = $this->securityContext->getAuthenticationTokensOfType(JwtToken::class)[0];
    $jwtPayload = $authenticationToken->getPayload();
    
    Example Request

This is a valid request and will be authenticated with the role JohannesSteu.JwtAuth:User in Flow:

curl -H "X-JWT: eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJhY2NvdW50SWRlbnRpZmllciI6InNvbWUtYWNjb3VudCIsIm5hbWUiOiJKb2huIERvZSJ9.8slTfTqCRozgcby-As6KxeCb5Zq9zX3TmVUcJAgW328" http://your-app.com
    

To debug the JWT string, click here.
Enter the shared secret aSharedSecret to verify the signature.
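
The payload can also be inspected locally with standard tools. A quick sketch, using the token from the example request (the payload is plain base64-encoded JSON, not encrypted):

```shell
# The JWT from the example request above.
JWT="eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJhY2NvdW50SWRlbnRpZmllciI6InNvbWUtYWNjb3VudCIsIm5hbWUiOiJKb2huIERvZSJ9.8slTfTqCRozgcby-As6KxeCb5Zq9zX3TmVUcJAgW328"
# The payload is the second dot-separated segment.
PAYLOAD=$(echo "$JWT" | cut -d '.' -f 2)
# This particular segment needs no padding; in general JWT uses unpadded
# base64url, so '=' padding and a '_-' to '/+' translation may be required.
echo "$PAYLOAD" | base64 -d
# {"accountIdentifier":"some-account","name":"John Doe"}
```

Note that decoding only reveals the claims; verifying the signature still requires the shared secret.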

    Visit original content creator repository
    https://github.com/johannessteu/JohannesSteu.JwtAuth

  • GRDBDiff

    🚧 EXPERIMENTAL – DON’T USE IN PRODUCTION 🚧

    GRDBDiff

    Various diff algorithms for SQLite, based on GRDB.

    Since it is possible to observe database changes, it is a natural desire to compute diffs between two consecutive observed values.

    There are many diff algorithms, which perform various kinds of comparisons. GRDBDiff ships with a few of them. Make sure you pick one that suits your needs.

    Demo Application

    The repository comes with a demo application that shows you:

    • How to synchronize the annotations on a map view with the content of the database.
    • How to animate a table view according to changes in the database.
    PlayersViewController Screenshot PlacesViewController Screenshot

    Set Differences

    What are the elements that were inserted, updated, deleted?

    This is the question that Set Differences can answer.

Set Differences do not care about the ordering of elements. They are well suited, for example, for synchronizing the annotations in a map view with the content of the database. But they cannot animate table views or collection views, which care a lot about the ordering of their cells.

    You track Set Differences with those three ValueObservation methods:

    extension ValueObservation {
        func setDifferencesFromRequest(...) -> ValueObservation
        func setDifferencesFromRequest(startingFrom:...) -> ValueObservation
        func setDifferences(...) -> ValueObservation
    }

    For example:

    // Track favorite places
    let request = Place.filter(Column("favorite")).orderedByPrimaryKey()
    let observer = try ValueObservation
        .trackingAll(request)
        .setDifferencesFromRequest()
        .start(in: dbQueue) { diff: SetDiff<Place> in
            print(diff.inserted) // [Place]
            print(diff.updated)  // [Place]
            print(diff.deleted)  // [Place]
        }

You will choose one method or the other depending on the type of the observed values. Record types can use setDifferencesFromRequest. The more general setDifferences variant requires a type that conforms to both the Identifiable and the standard Equatable protocols.

    setDifferencesFromRequest()

    Usage

    // 1.
    struct Place: FetchableRecord, TableRecord { ... }
    
    // 2.
    let request = Place.orderedByPrimaryKey()
    
    // 3.
    let placesObservation = ValueObservation.trackingAll(request)
    
    // 4.
    let diffObservation = placesObservation.setDifferencesFromRequest()
    
    // 5.
    let observer = diffObservation.start(in: dbQueue) { diff: SetDiff<Place> in
        print(diff.inserted) // [Place]
        print(diff.updated)  // [Place]
        print(diff.deleted)  // [Place]
    }
    1. Define a Record type that conforms to both FetchableRecord and TableRecord protocols.

  FetchableRecord makes it possible to fetch places from the database. TableRecord provides the database primary key for places, which makes it possible to identify places and decide whether they were inserted, updated, or deleted.

    2. Define a database request of the records you are interested in. Make sure the request is ordered by primary key. You’ll get wrong results if the request is not properly ordered.

      Ordering records by primary key provides an efficient O(N) computation of diffs.

    3. Define a ValueObservation from the request, with the ValueObservation.trackingAll method.

    4. Derive a Set Differences observation with the setDifferencesFromRequest method.

    5. Start the observation and enjoy your diffs!

    The Optional onUpdate Parameter

    By default, the records notified in the diff.updated array are newly created values.

When you need to customize the handling of updated records, provide an onUpdate closure. Its first parameter is an old record. The second one is a new database row. It returns the record that should be notified in diff.updated. It does not run on the main queue.

    For example, this observation prints changes:

    let diffObservation = placesObservation
        .setDifferencesFromRequest(onUpdate: { (place: Place, row: Row) in
            let newPlace = Place(row: row)
            print("changes: \(newPlace.databaseChanges(from: place))")
            return newPlace
        })

    And this other one reuses record instances:

    let diffObservation = placesObservation
        .setDifferencesFromRequest(onUpdate: { (place: Place, row: Row) in
            place.update(from: row)
            return place
        })

    setDifferencesFromRequest(startingFrom:)

    This method gives the same results as setDifferencesFromRequest(). The differences are:

    • The tracked record type must conform to the PersistableRecord protocol, on top of FetchableRecord and TableRecord.

    • The startingFrom parameter is passed an array of records used to compute the first diff. Make sure this array is ordered by primary key. You’ll get wrong results otherwise.

    setDifferences()

    Usage

    // 1.
    struct Element: Identifiable, Equatable { ... }
    
    // 2.
    let elementsObservation = ValueObservation.tracking...
    
    // 3.
    let diffObservation = elementsObservation.setDifferences()
    
    // 4.
    let observer = diffObservation.start(in: dbQueue) { diff: SetDiff<Element> in
        print(diff.inserted) // [Element]
        print(diff.updated)  // [Element]
        print(diff.deleted)  // [Element]
    }
    1. Define a type that conforms to both Identifiable and the standard Equatable protocols.

  Those two protocols make it possible to decide which elements were inserted, updated, or deleted.

    2. Define a ValueObservation which notifies elements. Elements must be sorted by identity. They must not contain two elements with the same identity. You’ll get wrong results otherwise.

  Ordering elements by identity provides an efficient O(N) computation of diffs.

    3. Derive a Set Differences observation with the setDifferences method.

    4. Start the observation and enjoy your diffs!

    The Identifiable Protocol

    protocol Identifiable {
        associatedtype Identity: Equatable
        var identity: Identity { get }
    }

    Identifiable is the protocol for “identifiable” values, which have an identity.

    When an identifiable type also adopts the Equatable protocol, two values that are equal must have the same identity. It is a programmer error to break this rule.

    However, two values that share the same identity may not be equal. In GRDBDiff, a value has been “updated” if two versions share the same identity, but are not equal.
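
As an illustrative sketch of these rules (the Place type below is hypothetical, and Identifiable here is GRDBDiff's protocol defined above, not the Swift standard library's):

```swift
struct Place: Equatable {
    var id: Int64
    var title: String
}

extension Place: Identifiable {
    // The identity is the primary key; it is stable across edits to other fields.
    var identity: Int64 { return id }
}

let before = Place(id: 1, title: "Paris")
let after = Place(id: 1, title: "Paris, France")
// before and after share the same identity but are not equal,
// so GRDBDiff would report this pair as "updated".
```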

    The Optional onUpdate Parameter

When you need to customize the handling of updated elements, provide an onUpdate closure. Its first parameter is an old element. The second one is a new element. It returns the element that should be notified in diff.updated. It does not run on the main queue.

    For example, this observation reuses element instances:

    let diffObservation = elementsObservation
        .setDifferences(onUpdate: { (old: Element, new: Element) in
            old.update(from: new)
            return old
        })

    The Optional startingFrom Parameter

    The startingFrom parameter is passed an array of elements used to compute the first diff. Make sure this array is ordered by identity, and does not contain two elements with the same identity. You’ll get wrong results otherwise.

    UITableView and UICollectionView Animations

    GRDBDiff does not ship with any diff algorithm able to perform such animation.

    But you can leverage third-party libraries. See the demo application for an example of integration of Differ with GRDB.

    Visit original content creator repository https://github.com/groue/GRDBDiff
  • font

    Font

    Tags CI Status Dependencies License

    This is a simple deno module providing WASM bindings to fontdue for font rasterization and layout, with support for TrueType (.ttf/.ttc) and OpenType (.otf) fonts.

    Example

    Rasterization

    import { Font } from "https://deno.land/x/font/mod.ts";
    
    // Read the font data.
    const data = await Deno.readFile("../Roboto-Regular.ttf");
    // Parse it into the font type.
    const font = new Font(data);
    // Rasterize and get the layout metrics for the letter 'g' at 17px.
    let { metrics, bitmap } = font.rasterize("g", 17.0);
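
    The returned bitmap is a flat, row-major array of per-pixel coverage values (0-255), with dimensions given by metrics.width and metrics.height. As a rough illustration of how such a buffer can be consumed (the rendering helper below is a sketch, not part of this module), it can be dumped as ASCII art:

    ```typescript
    // Render a row-major coverage bitmap (0-255 per pixel) as ASCII art,
    // one character per pixel, with darker shades for higher coverage.
    function asciiGlyph(bitmap: Uint8Array, width: number, height: number): string {
      const shades = " .:-=+*#%@"; // light -> dark
      let out = "";
      for (let y = 0; y < height; y++) {
        for (let x = 0; x < width; x++) {
          const coverage = bitmap[y * width + x];
          out += shades[Math.floor((coverage / 255) * (shades.length - 1))];
        }
        out += "\n";
      }
      return out;
    }
    ```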

    Prerequisites

    • deno: deno_install
    • rust: rustup
    • rustfmt: rustup component add rustfmt
    • rust-clippy: rustup component add clippy
    • wasm-pack: cargo install wasm-pack

    Development

    build

    $ deno run --unstable --allow-read --allow-write --allow-run scripts/build.ts
    building rust                  ("wasm-pack build --target web --release")
    read wasm                      (size: 150856 bytes)
    compressed wasm using lz4      (reduction: 58651 bytes, size: 92205 bytes)
    encoded wasm using base64      (increase: 30735 bytes, size: 122940 bytes)
    read js                        (size: 5895 bytes)
    inlined js and wasm            (size: 129016 bytes)
    minified js                    (size reduction: 3100 bytes, size: 125916 bytes)
    writing output to file         (wasm.js)
    final size is: 125916 bytes

    clean

    $ deno run --unstable --allow-read --allow-write --allow-run scripts/clean.ts
    cleaning cargo build           ("cargo clean")
    removing pkg

    fmt

    $ deno run --unstable --allow-run scripts/fmt.ts
    formatting typescript          ("deno --unstable fmt scripts/ test_deps.ts test.ts mod.ts")
    Checked 9 files
    formatting rust                ("cargo fmt")

    lint

    $ deno run --unstable --allow-run scripts/lint.ts
    linting typescript             ("deno --unstable lint scripts test_deps.ts test.ts mod.ts")
    Checked 9 files
    linting rust                   ("cargo clippy -q")

    Testing

    Requires the wasm.js file to be built first.

    $ deno test
    running 1 tests
    test add ... ok (2ms)
    
    test result: ok. 1 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out (2ms)

    Other

    Contribution

    Pull requests, issues, and feedback are very welcome. Code is formatted with deno fmt and commit messages follow the Conventional Commits spec.

    Licence

    Copyright 2021, Denosaurs. All rights reserved. MIT license.

    Visit original content creator repository https://github.com/denosaurs/font
  • ncaahoopR

    ncaahoopR

    ncaahoopR is an R package for working with NCAA Basketball Play-by-Play Data. It scrapes play-by-play data and returns it to the user in a tidy format, allowing the user to explore the data with assist networks, shot charts, and in-game win-probability charts.

    For pre-scraped schedules, rosters, box scores, and play-by-play data, check out the ncaahoopR_data repository.

    To see the latest changes in version 1.5, view the change log here.

    Installation

    You can install ncaahoopR from GitHub with:

    # install.packages("devtools")
    devtools::install_github("lbenz730/ncaahoopR")

    If you encounter installation issues, the following tips have helped a few users successfully install the package:

    • If given the option to compile any packages from source rather than installing existing binaries, choose 'No'.
    • Windows users with trouble installing the package should try running the following command before reinstalling the package: Sys.setenv(R_REMOTES_NO_ERRORS_FROM_WARNINGS = "true")
    • Windows users with trouble installing devtools should try first installing the backports package via install.packages("backports").

    Functions

    Several functions use ESPN game_ids. You can find the game_id in the URL for the game summary, as shown in the URL for the summary of the UMBC-Virginia game below.

    Scraping Data

    • get_pbp(team, season): Get an entire season’s worth of play-by-play data for a given team. season defaults to current season, but can be specified in “2019-20” form.

    • get_pbp_game(game_ids, extra_parse): Get play-by-play data for a specific vector of ESPN game_ids. extra_parse is a logical indicating whether to link shot variables and attempt possession parsing. Default = TRUE.

    • get_roster(team, season): Get a particular team’s roster. season defaults to current season, but can be specified in “2019-20” form.

    • get_schedule(team, season): Get a team’s schedule. season defaults to current season, but can be specified in “2019-20” form.

    • get_game_ids(team, season): Get a vector of ESPN game_ids for all games involving team specified. season defaults to current season, but can be specified in “2019-20” form.

    • get_master_schedule(date): Get schedule of all games for given date. Use YYYY-MM-DD date formatting.

    • get_boxscore(game_id): Returns a list of 2 data frames, one with each team’s box score for the game in question. Written by Jared Andrews.

    • season_boxscore(team, season = current_season, aggregate = 'average'): Returns (aggregated) player stats over the course of a season for a given team. Contributed in collaboration with Kurt Wirth.
      • team: team to return player stats for.
      • season: of form YYYY-YY. Defaults to current season.
      • aggregate: one of ‘average’ (per-game average statistics, the default), ‘total’ (sums of season stats), or ‘raw’ (all box scores bound together).

    The team parameter in the above functions must be a valid team name from the ids dataset built into the package. See the Datasets section below for more details.

    Win-Probability and Game-Flow Charts

    Win Probability Charts

    The latest function for plotting win probability charts is wp_chart_new. Following the 2021-22 season, the other win probability chart functions will be deprecated and replaced by this function (it will be renamed to wp_chart, but I don’t want to break any existing pipelines during the season). It no longer requires users to input colors. For best results, consider saving via ggsave(filename, height = 9/1.2, width = 16/1.2) (or some other 16:9 aspect ratio).

    wp_chart_new(game_id, home_col = NULL, away_col = NULL, include_spread = T, show_legend = T)

    • game_id: ESPN game_id for the desired contest.
    • home_col: Chart color for home team (if NULL, will default to the ncaa_colors primary_color field).
    • away_col: Chart color for away team (if NULL will default to ncaa_colors primary_color field).
    • include_spread: Logical, whether to include pre-game spread in Win Probability calculations. Default = TRUE.
    • show_legend: Logical, whether or not to show legend/text on chart. Default = TRUE.

    A prior version of wp_chart used base R, while gg_wp_chart used the ggplot2 plotting library. As of the 2020-21 season, both functions use the same ggplot2 code, and gg_wp_chart is now simply an alias for wp_chart.

    wp_chart(game_id, home_col, away_col, include_spread = T, show_legend = T)

    • game_id: ESPN game_id for the desired contest.
    • home_col: Chart color for home team.
    • away_col: Chart color for away team.
    • include_spread: Logical, whether to include pre-game spread in Win Probability calculations. Default = TRUE.
    • show_legend: Logical, whether or not to show legend/text on chart. Default = TRUE.

    gg_wp_chart(game_id, home_col, away_col, show_labels = T)

    • game_id: ESPN game_id for the desired contest.
    • home_col: Chart color for home team.
    • away_col: Chart color for away team.
    • include_spread: Logical, whether to include pre-game spread in Win Probability calculations. Default = TRUE.
    • show_labels: Logical whether Game Excitement Index and Minimum Win Probability metrics should be displayed on the plot. Default = TRUE.

    Game Flow Charts

    game_flow(game_id, home_col, away_col)

    • game_id: ESPN game_id for the desired contest.
    • home_col: Chart color for home team.
    • away_col: Chart color for away team.

    Game Excitement Index

    game_excitement_index(game_id, include_spread = T)

    • include_spread: Logical, whether to include pre-game spread in Win Probability calculations. Default = TRUE.

    Returns GEI (Game Excitement Index) for given ESPN game_id. For more information about how these win-probability charts are fit and how Game Excitement Index is calculated, check out the below links

    Game Control Measures

    average_win_prob(game_id, include_spread = T)

    • game_id: ESPN game_id for which to compute time-based average win probability (from the perspective of the home team).
    • include_spread: Logical, whether to include pre-game spread in Win Probability calculations. Default = TRUE.

    average_score_diff(game_id)

    • game_id: ESPN game_id for which to compute time-based average score differential (from the perspective of the home team).

    Assist Networks

    Traditional Assist Networks

    assist_net(team, season, node_col, three_weights = T, threshold = 0, message = NA, return_stats = T)

    • team is the ESPN team name, as listed in the ids data frame.
    • season Options include “2018-19” (for entire season), or a vector of ESPN game IDs.
    • node_col is the node color for the graph.
    • three_weights (default = TRUE): Logical. If TRUE, assisted three-point shots are given a weight of 1.5. If FALSE, assisted three-point shots are given a weight of 1. In both cases, assisted two-point shots are given a weight of 1.
    • threshold (default = 0) Number between 0-1 indicating minimum percentage of team’s assisted baskets a player needs to be involved in to be included in network graph.
    • message (default = NA) Option for custom message to replace graph title when using a subset of the season (e.g. conference play).
    • return_stats (default = TRUE) Return Assist Network-related statistics.

    Circle Assist Networks and Player Highlighting

    circle_assist_net(team, season, highlight_player = NA, highlight_color = NA, three_weights = T, threshold = 0, message = NA, return_stats = T)

    • team is the ESPN team name, as listed in the ids data frame.
    • season: Options include “YYYY-YY” (for entire season), or a vector of ESPN game IDs.
    • highlight_player (default = NA) Name of player to highlight in assist network. NA yields full-team assist network with no player highlighting.
    • highlight_color (default = NA) Color of player links to be highlighted. NA if highlight_player is NA.
    • three_weights (default = TRUE): Logical. If TRUE, assisted three-point shots are given a weight of 1.5. If FALSE, assisted three-point shots are given a weight of 1. In both cases, assisted two-point shots are given a weight of 1.
    • threshold (default = 0) Number between 0-1 indicating minimum percentage of team’s assisted baskets a player needs to be involved in to be included in network graph.
    • message (default = NA) User-supplied plot title to overwrite default plot title, if desired.
    • return_stats (default = TRUE) Return Assist Network-related statistics.

    Shot Charts

    There are currently three functions for scraping and plotting shot location data. These functions were written by Meyappan Subbaiah.

    get_shot_locs(game_id): Returns data frame with shot location data when available. Note that if the extra_parse flag in get_pbp_game is set to TRUE, shot location data will already be included in the play-by-play data (if available).

    • game_id: ESPN game_id from which shot locations should be scraped.

    game_shot_chart(game_id, heatmap = F): Plots shots for a given game.

    • game_id: ESPN game_id from which shot locations should be scraped.
    • heatmap (default = FALSE): Logical, whether to use density-heat map or plot individual points.
    • Shot-plotting colors are derived from the team’s primary color listed in the ncaa_colors data frame.

    team_shot_chart(game_ids, team, heatmap = F): Plots shots taken by team during a given set of game(s).

    • game_ids: Vector of ESPN game_ids from which shot locations should be scraped.
    • team: Which team to chart shots for.
    • heatmap (default = FALSE): Logical, whether to use density-heat map or plot individual points.
    • Shot-plotting colors are derived from the team’s primary color listed in the ncaa_colors data frame.

    opp_shot_chart(game_ids, team, heatmap = F): Plots shots against a team during a given set of game(s).

    • game_ids: Vector of ESPN game_ids from which shot locations should be scraped.
    • team: Which team to chart opponents’ shots for.
    • heatmap (default = FALSE): Logical, whether to use density-heat map or plot individual points.

    Datasets

    dict A data frame for converting between team names from various sites.

    • NCAA: the name of the team, as listed on the NCAA website
    • ESPN: the name of the team, as listed in ESPN URLs
    • ESPN_PBP: the name of the team, as listed in the ESPN Play-By-Play logs
    • Warren_Nolan: the name of the team, as listed on WarrenNolan.com
    • Trank: the name of the team, as listed on barttorvik.com
    • name_247: the name of the team, as listed on 247Sports.com

    ids A data frame of team names with the ESPN ids and links used to build ESPN URLs.

    • team: the name of the team to be supplied to functions in ncaahoopR package
    • id: team id; used in ESPN URLs
    • link: link; used in ESPN URLs
    • espn_abbrv: short 3-4 character code used in ESPN abbreviations

    ncaa_colors A data frame of team color hex codes, pulled from teamcolorcodes.com. Additional data coverage provided by Luke Morris.

    • ncaa_name: The name of the team, as listed on the NCAA website (same as dict$NCAA)
    • espn_name: The name of the team, as listed in ESPN URLs (same as dict$ESPN)
    • primary_color: Hexcode for team’s primary color.
    • secondary_color: Hexcode for team’s secondary color, when available.
    • tertiary_color: Hexcode for team’s tertiary color, when available.
    • color_4: Hexcode for team’s 4th color, when available.
    • color_5: Hexcode for team’s 5th color, when available.
    • color_6: Hexcode for team’s 6th color, when available.

    Primary and secondary colors are available for all 353 teams.

    These datasets can be loaded by typing data("ids"), data("ncaa_colors"), or data("dict"), respectively.

    Examples

    Win Probability Charts

    wp_chart_new(401403405)

    wp_chart(game_id = 401082978, home_col = "gray", away_col = "orange")

    wp_chart(game_id = 401168364, home_col = "#7BAFD4", away_col = "#001A57")

    Game Flow Chart

    game_flow(game_id = 401082669, home_col = "blue", away_col = "navy")

    Single-Game Assist Network

    assist_net(team = "Oklahoma", node_col = "firebrick4", season = 400989185)

    Season-Long Assist Network

    assist_net(team = "Yale", node_col = "royalblue4", season = "2017-18")

    Circle Assist Networks

    circle_assist_net(team = "UNC", season = 401082861)

    Player Highlighting

    circle_assist_net(team = "San Francisco", season = "2018-19", highlight_player = "Frankie Ferrari", highlight_color = "#FDBB30")

    Shot Charts

    game_shot_chart(game_id = 401168364, heatmap = T)

    game_shot_chart(game_id = 401168364)

    Glossary

    Play-by-Play files contain the following variables:

    • game_id: ESPN game_id for the game in question.
    • date: Date of game.
    • home: Name of the home team.
    • away: Name of the away team.
    • play_id: Unique identifier of play/event in sequence of game events.
    • half: Period of action in the game. 1 and 2 denote the first and second halves of play, while 3 denotes OT1, 4 denotes OT2 etc.
    • time_remaining_half: Time remaining in the period as it would appear on a scoreboard.
    • secs_remaining: Time remaining in regulation, in seconds.
    • secs_remaining_absolute: The time remaining until the game is over, in seconds. For example a game that goes to overtime would begin with 2700 seconds remaining (2400 for regulation and 300 for overtime), and regulation would end with 300 seconds remaining.
    • description: A description of the play/game event.
    • action_team: home/away, indicating which team is responsible for the play/event.
    • home_score: Home team’s score.
    • away_score: Away team’s score.
    • score_diff: Score differential from the home team’s perspective (home_score - away_score).
    • play_length: Duration of the given play, in seconds.
    • scoring_play: Boolean indicating scoring play.
    • foul: Boolean indicating foul.
    • win_prob: Win probability for the home team.
    • naive_win_prob: Win probability for the home team not factoring in pre-game point spread. Useful for computation of win probability added (WPA).
    • home_timeout_remaining: Number of timeouts remaining for the home team.
    • away_timeout_remaining: Number of timeouts remaining for the away team.
    • home_favored_by: Number of points by which the home team is favored, prior to tip-off. If Vegas point spread is available on ESPN, that is used as the default. When not available, an attempt is made to impute the pre-game point spread from derived team strengths. Imputed point spreads are not available for games prior to the 2016-17 season or when one of the teams is not in Division 1.
    • total_line: Total Vegas over/under for the game, where available.
    • referees: Referees for the game.
    • arena_location: City in which the game was played.
    • arena: Name of arena where game was played.
    • capacity: Capacity of arena where game was played.
    • attendance: Attendance of game, where available.
    • wrong_time: An attempt to label play-by-play events tagged at the wrong time. These are filtered out of all graphical and statistical helper functions, but may still be useful for certain analyses where time of event is of less importance.

    If extra_parse = TRUE in get_pbp_game, the following variables are also included.

    • shot_x: The half-court x coordinate of shot.
    • shot_y: The half-court y coordinate of shot. (0,0) represents the bottom left corner and (50, 47) represents the top right corner (from the perspective of standing under the hoop).
    • shot_team: Name of team taking shot.
    • shot_outcome: Whether the shot was made or missed.
    • shooter: Name of player taking shot.
    • assist: Name of player assisting the shot (assisted shots only).
    • three_pt: Logical, if shot is 3-point field goal attempt.
    • free_throw: Logical, if shot is free throw attempt.

    Stand-alone shot location data frames contain the following variables.

    • team_name: Name of shooting team.
    • shot_text: Description of shot.
    • color: Color hexcode used to render shot chart graphic on ESPN.
    • date: Date of game
    • outcome: Whether the shot was made or missed
    • shooter: Player attempting the shot
    • assister: Player assisting the shot
    • three_pt: Logical, whether the shot is a 3-point attempt
    • x: x-coordinate of shot location
    • y: y-coordinate of shot location

    Raw Shot Location Data

    The court is 94 feet long (baseline to baseline, interior) and 50 feet wide (sideline to sideline, interior). The court’s origin is located at center court, with the court being displayed in a horizontal fashion (the baskets lie along the x axis). In this coordinate grid, -x corresponds to the left basket and +x to the right. +y corresponds to the upper sideline of the court, and -y to the lower.

    Following ESPN’s convention, the home team’s shot locations are on the +x basket, and the visiting team’s on the -x basket. The center of each basket is at (+/-41.75, 0).

    The data pulled via get_shot_locs() follows this orientation.
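
    As a quick illustration of this coordinate system (a sketch in TypeScript, not part of the R package), the distance from a raw shot location to the nearer of the two rims at (+/-41.75, 0) can be computed as:

    ```typescript
    // Distance (in feet) from a raw shot location to the nearer basket.
    // Rim centers are at (-41.75, 0) and (+41.75, 0), per the court layout above.
    function shotDistance(x: number, y: number): number {
      const left = Math.hypot(x + 41.75, y);
      const right = Math.hypot(x - 41.75, y);
      return Math.min(left, right);
    }
    ```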

    Shot Chart Data

    For the shot chart functions, the x and y coordinates are “flipped” such that the court is oriented vertically, and each team would appear to be shooting on the same basket. That is, the home team and away team are both shooting on a basket centered at (0, -41.75). This is done out of convenience and does not affect any underlying analyses.

    Visit original content creator repository https://github.com/lbenz730/ncaahoopR
  • Blog

    Build Status Total Downloads Latest Stable Version License

    Description

    This is a blog application created with an MVC architecture.

    Technologies

    • Laravel 8
    • Laravel Livewire
    • Laravel Jetstream
    • Laravel Permission
    • Laravel Collective
    • MySQL Database
    • Blade Templates Frontend
    • Tailwind CSS
    • AdminLTE

    About Laravel

    Laravel is a web application framework with expressive, elegant syntax. We believe development must be an enjoyable and creative experience to be truly fulfilling. Laravel takes the pain out of development by easing common tasks used in many web projects.

    Laravel is accessible, powerful, and provides tools required for large, robust applications.

    Learning Laravel

    Laravel has the most extensive and thorough documentation and video tutorial library of all modern web application frameworks, making it a breeze to get started with the framework.

    If you don’t feel like reading, Laracasts can help. Laracasts contains over 1500 video tutorials on a range of topics including Laravel, modern PHP, unit testing, and JavaScript. Boost your skills by digging into our comprehensive video library.

    Laravel Sponsors

    We would like to extend our thanks to the following sponsors for funding Laravel development. If you are interested in becoming a sponsor, please visit the Laravel Patreon page.

    Premium Partners

    Contributing

    Thank you for considering contributing to the Laravel framework! The contribution guide can be found in the Laravel documentation.

    Code of Conduct

    In order to ensure that the Laravel community is welcoming to all, please review and abide by the Code of Conduct.

    Security Vulnerabilities

    If you discover a security vulnerability within Laravel, please send an e-mail to Taylor Otwell via taylor@laravel.com. All security vulnerabilities will be promptly addressed.

    License

    The Laravel framework is open-sourced software licensed under the MIT license.

    Visit original content creator repository https://github.com/TomasDep/Blog
  • AzureMobileClient.Helpers

    AzureMobileClient.Helpers

    AzureMobileClient.Helpers is a lightweight toolkit for using the Microsoft Azure Mobile Client. It provides a set of abstractions and base classes originally based on the samples from Adrian Hall, along with a few tweaks to follow best practices with an interface-based design.

    Note that this library has been aligned with the Microsoft.Azure.Mobile.Client and is offered using NetStandard1.4; as such, it is not compatible with traditional PCL projects. For this reason, it is recommended that you check out the Prism Templates I have available for dotnet new, which use a NetStandard1.4 common library for the shared code.

    Package Version MyGet
    AzureMobileClient.Helpers HelpersShield HelpersMyGetShield
    AzureMobileClient.Helpers.Autofac HelpersAutofacShield HelpersAutofacMyGetShield
    AzureMobileClient.Helpers.DryIoc HelpersDryIocShield HelpersDryIocMyGetShield
    AzureMobileClient.Helpers.SimpleInjector HelpersSimpleInjectorShield HelpersSimpleInjectorMyGetShield
    AzureMobileClient.Helpers.Unity HelpersUnityShield HelpersUnityMyGetShield
    AzureMobileClient.Helpers.AzureActiveDirectory HelpersAADShield HelpersAADMyGetShield

    Support

    If this project helped you reduce time to develop and made your app better, please help support this project.

    paypal

    Resources

    Setting up the library for Dependency Injection

    The following examples are based on using DryIoc in a Prism Application:

    protected override void RegisterTypes()
    {
        // ICloudTable is only needed for Online Only data
        Container.Register(typeof(ICloudTable<>), typeof(AzureCloudTable<>), Reuse.Singleton);
        Container.Register(typeof(ICloudSyncTable<>), typeof(AzureCloudSyncTable<>), Reuse.Singleton);
    
        Container.UseInstance<IPublicClientApplication>(new PublicClientApplication(Secrets.AuthClientId, AppConstants.Authority)
        {
            RedirectUri = AppConstants.RedirectUri
        });
    
        Container.RegisterMany<AADOptions>(reuse: Reuse.Singleton,
                                           serviceTypeCondition: type =>
                                                    type == typeof(IAADOptions) ||
                                                    type == typeof(IAADLoginProviderOptions));
    
        Container.Register<IAzureCloudServiceOptions, AppServiceContextOptions>(Reuse.Singleton);
        Container.RegisterMany<AppDataContext>(reuse: Reuse.Singleton,
                                               serviceTypeCondition: type => 
                                                    type == typeof(IAppDataContext) ||
                                                    type == typeof(ICloudService));
        Container.RegisterDelegate<IMobileServiceClient>(factoryDelegate: r => r.Resolve<ICloudService>().Client,
                                                         reuse: Reuse.Singleton,
                                                         setup: Setup.With(allowDisposableTransient: true));
        Container.Register<ILoginProvider<AADAccount>,LoginProvider>(Reuse.Singleton);
    }
    public class AwesomeAppCloudServiceOptions : IAzureCloudServiceOptions
    {
        public string AppServiceEndpoint => "https://yourappname.azurewebsites.net";
        public string AlternateLoginHost => string.Empty;
        public string LoginUriPrefix => string.Empty;
        public HttpMessageHandler[] Handlers => new HttpMessageHandler[0];
    }
    
    public class AwesomeAppCustomerAppContext : DryIocCloudAppContext
    {
        public AwesomeAppCustomerAppContext(IContainer container)
            // We can optionally pass in a database name
            : base(container, "myDatabaseName.db")
        {
    
        }
    
        /*
         * NOTE: This is architected to be similar to Entity Framework in that
         * the CloudAppContext will look for properties that are ICloudSyncTable<>
         * so that it can register the Model type with the SQLite Store.
         */
        public ICloudSyncTable<Customer> Customers => SyncTable<Customer>();
        public ICloudSyncTable<Invoice> Invoices => SyncTable<Invoice>();
        public ICloudSyncTable<InvoiceItem> InvoiceItems => SyncTable<InvoiceItem>();
        public ICloudTable<Feedback> Feedback => Table<Feedback>();
    
    }
    
    public class Customer : EntityData
    {
        public string FirstName { get; set; }
        public string LastName { get; set; }
        public string Email { get; set; }
    }
    
    public class Invoice : EntityData
    {
        public string CustomerId { get; set; }
    }
    
    public class InvoiceItem : EntityData
    {
        public string InvoiceId { get; set; }
        public string ItemId { get; set; }
        public int Quantity { get; set; }
    }
    
    public class Feedback : EntityData
    {
        public string Message { get; set; }
        public string Status { get; set; }
    }
    Visit original content creator repository https://github.com/dansiegel/AzureMobileClient.Helpers
  • Android-Networking-Basics

    Android-Networking-Basics

    Hi Friends!

    WHO IS THIS REPO FOR?

    If you are an Android developer who has read about networking in Android but wants to practice and understand these networking concepts, then this GitHub repo is for you.

    WHAT ALL ANDROID CONCEPTS IT COVERS?

    • JSON Parser in Android
    • AsyncTask (Working in a Background Thread)
    • ProgressDialog

    WHAT I HAVE DONE IN THIS PROJECT?

    It’s a simple app demonstrating how to fetch data from the internet and show it in the app. To demonstrate this, I have used the following API: http://mobileappdatabase.in/demo/smartnews/app_dashboard/jsonUrl/single-article.php?article-id=71

    The data we get from this endpoint is in JSON format, so the app uses Android’s JSON classes to read and extract the specific values we want to display.
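
    That parsing step amounts to reading specific keys out of the decoded response. A minimal sketch (in TypeScript for brevity; the field names here are hypothetical, not the actual API’s):

    ```typescript
    // Hypothetical shape of one article in the JSON response.
    type Article = { title: string; description: string; imageUrl: string };

    // Parse the raw response body and pull out the fields to display.
    function parseArticle(body: string): Article {
      const json = JSON.parse(body);
      return {
        title: json["title"],
        description: json["description"],
        imageUrl: json["imageUrl"],
      };
    }
    ```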

    The app also displays an image from a URL. For that, I have used an external library named Glide (https://github.com/bumptech/glide) to load and display the image.

    I have provided comments everywhere in the code to give you more details and help you in understanding the concepts.

    Reference used while building this app: http://abhiandroid.com/programming/asynctask

    Visit original content creator repository
    https://github.com/akshaychopra96/Android-Networking-Basics