Monday, November 20, 2023

VSCode

 Notes on VSCode

Jump to the matching opening/closing bracket in a JSON file: Ctrl+Shift+\


JSONPath plugin

Note: this should probably be added to the general JSON plugin.

Lets you run JSONPath expressions inside VSCode. An alternative is to use https://jsonpath.com/

Once the plugin is installed, press Ctrl+Shift+P to open the command palette and run:

Run Jsonpath: Extract json data with their paths

Some sample expressions (run against a Nexus IQ response; a minimal illustrative response shape follows the list):

$.components.length   = 33

$.components[0].displayName

$.components[0].licenseData

$.components[*].displayName

$.components[*].licenseData.effectiveLicenseThreats

$.components[*].licenseData.effectiveLicenseThreats.length

$.components[*].securityData.securityIssues

$.components[*].dependencyData
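
For reference, here is a made-up minimal response shape that the expressions above would match (field names follow the samples; the values are purely illustrative):

{
  "components": [
    {
      "displayName": "commons-io : commons-io : 2.6",
      "licenseData": { "effectiveLicenseThreats": [ { "licenseThreatGroupName": "Liberal" } ] },
      "securityData": { "securityIssues": [ { "reference": "CVE-XXXX-YYYY", "severity": 7.5 } ] },
      "dependencyData": { "directDependency": true }
    }
  ]
}

e.g. $.components[0].displayName would return "commons-io : commons-io : 2.6" here.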

RESTClient

1/ Add variables for hosts / usernames / passwords per environment


Create .vscode/settings.json (a settings.json file inside a .vscode folder), e.g. the snippet below.

Then choose an environment from the bottom right of the status bar.

{
    "rest-client.environmentVariables": {
        "dev": {
            "url": "http://urlDev",
            "user": "blah",
            "password": "myPass"
        },
        "uat": {
            "url": "http://urlUat",
            "user": "blah2",
            "password": "myPass2"
        }
    }
}

2/ Capture results and reuse

e.g. these examples query reports from Nexus IQ.

The named request getAppIdResp holds both the request and its response; we use it to take the applicationId from the response (using JSONPath) and pass it into the next request. Note: be careful to spot arrays in the response.

### Get application Id by name
@name getAppIdResp
GET {{url}}/api/v2/applications?publicId=My Project
Authorization: Basic {{user}} {{token}}

### Get individual reports
@name getReports
GET {{url}}/api/v2/reports/applications/{{getAppIdResp.response.body.$.applications[0].id}}
Authorization: Basic {{user}} {{token}}

### Using reportDataUrl from previous result we get the link to the raw data
@name getReport
GET {{url}}/{{getReports.response.body.$[0].reportDataUrl}}
Authorization: Basic {{user}} {{token}}


### Using reportDataUrl from previous result we get the link to the raw data
@name report
GET {{url}}/api/v2/applications/CAASBUILD-nexus-pipeline/reports/b38675be326741678d907d3d35dac573/raw
Authorization: Basic {{user}} {{token}}


### Using reportDataUrl from change context to policy to get policy violations
GET {{url}}/api/v2/applications/CAASBUILD-nexus-pipeline/reports/b38675be326741678d907d3d35dac573/policy
Authorization: Basic {{user}} {{token}}


Nice!

Tuesday, December 06, 2022

API Governance

This article is a draft document outlining the typical rules that should be adhered to as part of an API governance regime.

Goals

The goals of this document are to make clear the rules and procedures that we, the API vendor, will adopt for updating, deprecating, and removing API calls, along with the rules around communicating with end users to inform them of changes or new releases, and the timelines around this.

These guidelines are to ensure a mutual understanding between vendor and client, and to ensure that both actors work in the best interests of all users of the system. 

Versioning

The goal is to try and minimize changes where possible.

The tools will be split into two main entities. 
1/ Client
The client libraries will be autogenerated from the server build.
This ensures that all client libraries will be compatible with the server code from which they were generated. As a general rule, client versions will follow semver versioning. Client releases will be less frequent than server releases, as they will only need to be updated once API changes are made.

2/ Server
This will have a higher cadence of change than the client, as internal functionality may change without affecting the API contract. As such it will follow semver versioning, where internal changes will be labelled with patch version increases. A minor version change will be a change involving extra parameters that is not considered a breaking change, and finally breaking changes will be represented by a major version change.

Client libraries

Client libraries will be published in Artifactory.
They are available here:
Documentation on the versions will be published here:
By default we will generate Java clients; however, clients for many different languages are possible using the openapi-generator tool, so teams may generate their own versions of the documentation by running the tool and pointing it at the latest API definition, as documented.
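
As an illustration only (the API definition URL and output directory are placeholders), a client could be generated with something like:

openapi-generator-cli generate \
  -i https://myhost/v3/api-docs \
  -g java \
  -o ./generated-client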

API docs

The API documentation will be published via swagger-UI and is available from here: URL

Change Process

In general we will try to follow the change process below; however, there may be occasions (e.g. regulatory changes) which force us to deviate from this process.

Once a new version of any API call is rolled out, it will come with a new version of the client
All existing users of the API will get an email (sent to the administrators of the project), informing them of this new release.
The old API will be marked as deprecated in the new client but will continue to be available for a period of 1 year.
After this period the old code will be removed from the server, and old clients may fail.
After 6 months we will begin warning teams who are still using the old API of this upcoming change.


Friday, November 05, 2021

TODO

Write a REST server with swagger, e.g. a FeatureFlag API.

Write a client (maybe with Feign, or equivalent)

Write ControllerAdvice for handling errors

deploy to Openshift


Also create a Jenkins instance in the cloud.

Add pipeline extension for deploying to linode or google.


Add vault / consul


Google DB.. use it

Wednesday, January 22, 2020

java Keytool

I found this site with a bash script for checking the expiry dates of certificates in a Java keystore using keytool:

https://www.davidgouveia.net/2013/07/simple-script-to-check-expiry-dates-on-a-java-keystore-jks-file/

./checkCertificate --keystore [YOUR_KEYSTORE_FILE] --password [YOUR_PASSWORD] --threshold [THRESHOLD_IN_DAYS]

Very useful, and can also be integrated with Nagios.

I made some small adjustments to allow you to automatically delete expired certs. I also changed the timeout command as it wasn't working with my RHEL 6.

./checkCertificate --keystore [YOUR_KEYSTORE_FILE] --password [YOUR_PASSWORD] --threshold [THRESHOLD_IN_DAYS] [--delete-expired]


#!/bin/sh

########################################################
#
#       Check certificates inside a java keystore
#
########################################################
#TIMEOUT="timeout -k 10s 5s "
TIMEOUT="timeout 10s "
KEYTOOL="$TIMEOUT keytool"
THRESHOLD_IN_DAYS="30"
KEYSTORE=""
PASSWORD=""
DELETE_EXPIRED=false
RET=0

ARGS=`getopt -o "p:k:t:" -l "password:,keystore:,threshold:,delete-expired" -n "$0" -- "$@"`

function usage {
        echo "Usage: $0 --keystore [--password ] [--threshold ] [--delete-expired]"
        exit
}



function start {
        CURRENT=`date +%s`

        THRESHOLD=$(($CURRENT + ($THRESHOLD_IN_DAYS*24*60*60)))
        if [ $THRESHOLD -le $CURRENT ]; then
                echo "[ERROR] Invalid date."
                exit 1
        fi
        echo "Looking for certificates inside the keystore $(basename $KEYSTORE) expiring in $THRESHOLD_IN_DAYS day(s)...Deleting Expired $DELETE_EXPIRED"

        $KEYTOOL -list -v -keystore "$KEYSTORE"  $PASSWORD > /dev/null 2>&1
        if [ $? -gt 0 ]; then echo "Error opening the keystore."; exit 1; fi

        $KEYTOOL -list -v -keystore "$KEYSTORE"  $PASSWORD | grep Alias | awk '{print $3}' | while read ALIAS
        do
                #Iterate through all the certificate alias
                EXPIRACY=`$KEYTOOL -list -v -keystore "$KEYSTORE"  $PASSWORD -alias $ALIAS | grep Valid`
                UNTIL=`$KEYTOOL -list -v -keystore "$KEYSTORE"  $PASSWORD -alias $ALIAS | grep Valid | perl -ne 'if(/until: (.*?)\n/) { print "$1\n"; }'`
                UNTIL_SECONDS=`date -d "$UNTIL" +%s`
                REMAINING_DAYS=$(( ($UNTIL_SECONDS -  $(date +%s)) / 60 / 60 / 24 ))
                if [ $THRESHOLD -le $UNTIL_SECONDS ]; then
                        echo "[OK]      Certificate $ALIAS expires in '$UNTIL' ($REMAINING_DAYS day(s) remaining)."
                else
                        echo "[WARNING] Certificate $ALIAS expires in '$UNTIL' ($REMAINING_DAYS day(s) remaining)."
                        if $DELETE_EXPIRED && [ $REMAINING_DAYS -lt 0 ]; then
                                $KEYTOOL -delete -v -keystore "$KEYSTORE" -alias $ALIAS  $PASSWORD
                        fi
                        RET=1
                fi

        done
        echo "Finished..."
        exit $RET
}

eval set -- "$ARGS"

while true
do
        case "$1" in
                -p|--password)
                        if [ -n "$2" ]; then PASSWORD=" -storepass $2"; else echo "Invalid password"; exit 1; fi
                        shift 2;;
                -k|--keystore)
                        if [ ! -f "$2" ]; then echo "Keystore not found: $1"; exit 1; else KEYSTORE=$2; fi
                        shift 2;;
                -t|--threshold)
                        if [ -n "$2" ] && [[ $2 =~ ^[0-9]+$ ]]; then THRESHOLD_IN_DAYS=$2; else echo "Invalid threshold"; exit 1; fi
                        shift 2;;
                --delete-expired)
                        DELETE_EXPIRED=true
                        shift 1;;
                --)
                        shift
                        break;;
        esac
done

if [ -n "$KEYSTORE" ]
then
        start
else
        usage
fi

Friday, December 27, 2019

Web testing with Selenium/ Geb / WebDriver

Spock

@Stepwise forces tests to run sequentially in declared order (even if parallel spec runner). If a method fails, remaining tests will be skipped


To check whether tests should be run or skipped you can use the annotations @Requires or @IgnoreIf.
However, these only work with system variables (not instance variables).

For assessment based on instance variables you can use JUnit's Assume.assumeTrue. See https://stackoverflow.com/questions/33818907/programmatically-skip-a-test-in-spock
It throws an exception that Spock already catches, so the test case ends up being ignored (see the sketch below).
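
A minimal sketch of both approaches (assumes Spock 1.x on the JUnit 4 runner; the system property and env variable names are made up):

import org.junit.Assume
import spock.lang.IgnoreIf
import spock.lang.Requires
import spock.lang.Specification

class ConditionalSpec extends Specification {

    // Annotation-based skipping: the closure only sees static context
    // (system properties, env vars), not instance state
    @Requires({ sys['integration.tests'] == 'true' })
    def "runs only when the integration.tests system property is set"() {
        expect:
        1 + 1 == 2
    }

    @IgnoreIf({ env['SKIP_SLOW_TESTS'] == 'true' })
    def "skipped when SKIP_SLOW_TESTS is set"() {
        expect:
        true
    }

    // Instance state only exists at runtime, so use JUnit's Assume inside the feature method;
    // the AssumptionViolatedException it throws is treated by Spock as an ignored test
    def "skips itself based on an instance variable"() {
        given:
        boolean featureEnabled = false   // e.g. read from some runtime configuration
        Assume.assumeTrue(featureEnabled)

        expect:
        featureEnabled
    }
}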
 

Geb / Selenium

Just some notes on web testing.

WebDriver comes with 4 finders https://www.w3.org/TR/webdriver1/#element-retrieval

The Find Element, Find Elements, Find Element From Element, and Find Elements From Element commands allow lookup of individual elements and collections of elements.

The parameter passed to the find method specifies the Locator strategy

Here are some examples using the WebDriver directly:

// By class
List elementsList = driver.findElements(By.className("myClass"))
// By XPath
WebElement element = driver.findElement(By.xpath("//table[@class='myClass']")) // XPath expressions are easily copyable from the Chrome developer tab
// By CSS (using Geb's waitFor here)
waitFor(10) { driver.findElement(By.cssSelector("table.table")).displayed }



Note: with cssSelector you can use :nth-child to sub-select a specific element (this is similar to the :eq operator in jQuery, although the index is 0-based in jQuery but 1-based in :nth-child). Note also that the index might not be obvious: I was trying to select what appeared to be the third table, but when using the copy-selector option from Chrome it was (apparently) the 7th child.
e.g.
$("body > table:nth-child(3)")
in jQuery you can also use $("body > table:eq(3)") // Note: zero-based index

For information on Geb, go to The Book of Geb. It has some nice Groovy syntax to remove boilerplate logic.

Testing in Chrome

You can test your selectors in Chrome
  • Press F12 to open up Chrome DevTools.
  • Switch to Console panel.
  • Type in XPath like $x(".//*[@id='id']") to evaluate and validate.
  • Type in CSS selectors like $$("#id") to evaluate and validate.
  • Check results returned from console execution.
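
For example, run directly in the DevTools console (the selectors here are just illustrative):

// In the Chrome DevTools console
$x("//table[contains(@class,'table')]")   // XPath query, returns an array of matching elements
$$("body > table:nth-child(3)")           // CSS selector query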

Page Object Pattern

As well as writing tests directly, you can abstract pages away with the Page Object pattern.


import geb.Page

class WwwSchoolsAjaxExamplePage extends Page{

  private  static final DEFAULT_DIV_CONTENT = 'Let AJAX change this text'

  static url = "ajax/ajax_example.asp"

  static at = {
      title=="AJAX Example"
  }

  static content = {
      theButton { $(".example button") }
      theResultDiv { $("#myDiv") }
  }

  def makeRequest() {
      theButton.click()
      waitFor { theResultDiv.text()!=DEFAULT_DIV_CONTENT }
  }

}
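
A minimal sketch of a spec driving this page (assumes a GebSpec base class and that the Geb baseUrl is configured so the page's relative url resolves):

import geb.spock.GebSpec

class AjaxExampleSpec extends GebSpec {

    def "clicking the button replaces the default div text"() {
        when:
        to WwwSchoolsAjaxExamplePage   // navigates to the page url and runs the 'at' checker

        and:
        makeRequest()

        then:
        theResultDiv.text() != 'Let AJAX change this text'
    }
}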

Monday, October 14, 2019

Global gradle settings

If you have a local repo for all dependencies you can include this in a global init.gradle file so that all builds pick it up (akin to settings.xml in Maven).

init.gradle is stored in your $GRADLE_HOME (e.g. $HOME/.gradle)

e.g. here is an example

allprojects {
    repositories {
        mavenLocal()
        maven { url "http://myartifactory/repo1-cache" }
        maven { url "http://myartifactory/libs-release" }
        maven { url "http://myartifactory/libs-snapshot" }
    }
    buildscript {
        repositories {
            mavenLocal()
            maven { url "http://myartifactory/repo1-cache" }
            maven { url "http://myartifactory/libs-release" }
            maven { url "http://myartifactory/libs-snapshot" }
        }
    }
}

If you need this picked up in IntelliJ, then you should run the gradle wrapper task and update gradle/wrapper/gradle-wrapper.properties, fixing the distributionUrl value.
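
e.g. a gradle-wrapper.properties pointing the wrapper at an internal mirror might look like this (the URL is just a placeholder):

distributionBase=GRADLE_USER_HOME
distributionPath=wrapper/dists
distributionUrl=https\://myartifactory/gradle-distributions/gradle-6.0-bin.zip
zipStoreBase=GRADLE_USER_HOME
zipStorePath=wrapper/dists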

Thursday, August 01, 2019

zip file handling groovy/ java

This is just a simple program to correlate the dates and times of files in a zip file. We used it to verify delivery times of externally sourced data. It's got some extra stuff in there since we only wanted a subset of the files.

import groovy.transform.Field

import java.text.DateFormat
import java.text.SimpleDateFormat
import java.util.regex.Matcher
import java.util.regex.Pattern
import java.util.zip.ZipEntry
import java.util.zip.ZipFile
import java.util.zip.ZipOutputStream


class FileEntry{
   String filename;
   String dir;
   Date date;
   String day;
   String time;
   DateFormat dayDf = new SimpleDateFormat("yyyy-MM-dd");
   DateFormat timeDf = new SimpleDateFormat("HH:mm:ss");
   
   FileEntry(String filename, Date d){
       this(filename, null, d);
   }
   FileEntry(String filename, String dir, Date d){
       this.filename = filename;
       this.dir = dir;
       this.date = d;
       day = dayDf.format(d);
       time = timeDf.format(d);
   }
   
   public String toString(){
       return filename+","+day+","+time+"\n";
   }
}


String zipFileName = "c:/temp/dropdirsAug2018.zip"
if(args.length>0)
    zipFileName=args[0]
ZipFile inputZip
try{
    inputZip = new ZipFile(zipFileName)
}catch(Exception e){
    println "Cannot find expected file $zipFileName"
    return
}

/*
 * Table listing files to check 
 * inName is the path and name of input files (including asterisk wildcard)
 * outName is the output name. 
 */
def zipFilesToMove = [
        [inName:"/path/*filename*.csv", outName:"<#>.filename.csv"],
        //[inName:"/path/otherfile.xml", outName:"otherfile.xml", outDir:"/notUsedHere"]

]


def list = doUnzip(inputZip, zipFilesToMove)

//Clean up old folders
//println list
writeToFile(list)
println "Done"

void writeToFile(List fileEntry){
println "Writing to file"
    File f = new File("output.csv");
    f.text = "";
    int x=0;
    for(FileEntry line:fileEntry){
        f << line.toString();
        println line
        x++;
    }
    println "Written $x lines"
}

// Do unzip for specified files
public List doUnzip(ZipFile inputZip, List zipFilesToCheck){
    List ret = new ArrayList();


    //Loop through bigZip to find files we want.
    // if we find a file we unzip it based ondate to different folder
    Enumeration e = inputZip.entries()
    while(e.hasMoreElements()) {
        ZipEntry zipEntry = e.nextElement()
        if (zipEntry.isDirectory()) {
            continue;
        }
        FileEntry fileEntry = checkIfValidZipEntry(zipEntry, zipFilesToCheck)
        if(fileEntry)
            ret << fileEntry;
    }
    return ret
}

public FileEntry checkIfValidZipEntry(ZipEntry zipEntry, List zipFilesToCheck){
    String zipEntryName = zipEntry.getName();
    long timestamp = zipEntry.getTime()
    for(Map zipFileToCheck: zipFilesToCheck){
        String regex = getRegex(zipFileToCheck.inName)
        // Note: String.matches() checks if the whole string matches, not a substring, so use Pattern/Matcher find() instead
        Pattern p = Pattern.compile(regex);
        Matcher m = p.matcher(zipEntryName);
        boolean match = m.find()
        if(match){
            Date d = new Date();
            d.setTime(zipEntry.getTime())
            return new FileEntry(zipFileToCheck.inName, d);
        }        
    }
    //println "No match for $zipEntryName.. Ignoring"
    return null
}

String getRegex(String filename){
    return filename.replaceAll("\\*", ".*").replaceAll("\\/","\\\\/")
}

Monday, June 24, 2019

SaltStack

Some notes

Get formulas from https://github.com/saltstack-formulas/ e.g. for HAProxy
 https://github.com/saltstack-formulas/haproxy-formula

The steps below are condensed (and include more explicit commands than on this page https://docs.saltstack.com/en/latest/topics/development/conventions/formulas.html)

To fetch with curl:
curl -LOk https://github.com/saltstack-formulas/haproxy-formula.git

Unzip it, then add it to the Salt master file_roots (sudo vi /etc/salt/master):
file_roots:
  base:
    - /srv/salt
    - /srv/formulas/apache-formula
Restart Salt Master
sudo pkill salt-master 
sudo salt-master -d 

Run state e.g. for haproxy
sudo salt '*' state.apply haproxy.install

Salt States

Backup folder
backup_folder:
  file.copy:
    - name: {{ folder_name }}.bak.{{ None|strftime("%Y-%m-%d_%H_%M") }}
    - source: {{ folder_name }}
    - user: {{ user }}
    - group: {{ group }}


Set variable to latest filename
{%- set fileName = salt['file.find']('/var/publish/',type='f', name='PackageToPublish-1.*.tar.gz')  | last -%}
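
A sketch of how that captured variable might then be used in a state (the state id and target path here are made up):

publish_latest_package:
  file.copy:
    - name: /opt/publish/PackageToPublish-latest.tar.gz
    - source: {{ fileName }}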

Include Another state
Say we have a service stop state in a file called service/stop.sls
stop_service:
  service.dead:
    - name: myservice   # placeholder: the name of the service to stop

If we are in the same folder and want to include it, we include it using its filename (with any directories in front).
However, in the require step we just use the id name (it can also have identifiers like service:, pkg:, etc.).

include:
  - service.stop

upgrade_archive_unpacked:
  archive.extracted:
    - name: {{ pillar['root_dir'] }}/{{ pillar['service']['upgrade'] }}
    - source: {{ pillar['service']['source'] }}
    - source_hash: {{ pillar['service']['source_hash'] }}
    - user: {{ pillar['user'] }}
    - group: {{ pillar['group'] }}
    - overwrite: True
    - enforce_ownership_on: {{ pillar['root_dir'] }}
    - enforce_toplevel: False
    - options: "--strip-components=1"
    - require:
      - stop_service
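
For reference, the pillar data that the state above looks up might be shaped like this (key names match the pillar calls; the values are made up):

root_dir: /opt/myservice
user: appuser
group: appgroup
service:
  upgrade: upgrade
  source: salt://files/PackageToPublish-1.2.3.tar.gz
  source_hash: sha256=0123456789abcdef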


Rollback to backup
{%- set rollbackFolderTuple = salt['file.find']('/pathToSearch/', type='d', name='dirname.bak.*', print='mtime,name')| sort  | last -%}
{%- set rollbackFolder = rollbackFolderTuple[1] %}
rollback_folder:
  file.rename:
    - name: {{ rollbackFolder }}
    - source: {{ folder_name }}
    - user: {{ user }}
    - group: {{ group }}

Tuesday, April 09, 2019

Postgres

Cheat sheets

https://gist.github.com/Kartones/dd3ff5ec5ea238d4c546

https://gist.github.com/apolloclark/ea5466d5929e63043dcf

Number of active connections by DB and IP
select count(*),datname, client_addr from pg_stat_activity group by datname, client_addr;

Note: you can also use ps to show the number of active processes:
 ps -ef | grep -i postgres

To just show all connections to a particular DB
select substring(query,0,90),state,query_start,pid from pg_stat_activity where datname='DBNAME' order by query_start;

Locked queries
Can use something like this to show locks

SELECT blocked_locks.pid     AS blocked_pid,
         blocking_locks.pid     AS blocking_pid,
         blocking_activity.state AS blocking_state,
         blocking_activity.query_start AS blocking_query_start,
         substring(blocked_activity.query,0,60)    AS blocked_statement,
         substring(blocking_activity.query,0,60)   AS current_statement_in_blocking_process,
         blocked_activity.datname AS db
    FROM  pg_catalog.pg_locks         blocked_locks
     JOIN pg_catalog.pg_stat_activity blocked_activity  ON blocked_activity.pid = blocked_locks.pid
     JOIN pg_catalog.pg_locks         blocking_locks
         ON blocking_locks.locktype = blocked_locks.locktype
         AND blocking_locks.DATABASE IS NOT DISTINCT FROM blocked_locks.DATABASE
         AND blocking_locks.relation IS NOT DISTINCT FROM blocked_locks.relation
         AND blocking_locks.page IS NOT DISTINCT FROM blocked_locks.page
         AND blocking_locks.tuple IS NOT DISTINCT FROM blocked_locks.tuple
         AND blocking_locks.virtualxid IS NOT DISTINCT FROM blocked_locks.virtualxid
         AND blocking_locks.transactionid IS NOT DISTINCT FROM blocked_locks.transactionid
         AND blocking_locks.classid IS NOT DISTINCT FROM blocked_locks.classid
         AND blocking_locks.objid IS NOT DISTINCT FROM blocked_locks.objid
         AND blocking_locks.objsubid IS NOT DISTINCT FROM blocked_locks.objsubid
         AND blocking_locks.pid != blocked_locks.pid
     JOIN pg_catalog.pg_stat_activity blocking_activity ON blocking_activity.pid = blocking_locks.pid
    WHERE NOT blocked_locks.GRANTED ORDER BY blocking_activity.query_start;

Permissions and pg_hba

Permissions are controlled by the pg_hba file

To find out where this is run

show hba_file;

Normally somewhere like /var/lib/pgsql/10/data/pg_hba.conf

By default you will not be able to run psql -U postgres unless you are the postgres user (in Linux); you will get "Peer authentication failed" (see https://gist.github.com/AtulKsol/4470d377b448e56468baef85af7fd614).

I have seen this set up to allow all local users to get access:


# IPv4 local connections:
host    all             all             127.0.0.1/32            trust
# Default is host    all             all             127.0.0.1/32            ident


Data

Postgres stores its data in files under the data folder.
The default is something like /var/lib/postgresql/9.5/main.
I have also seen /var/lib/pgsql/9.6/data/; the compiled-in default is /usr/local/pgsql/data.
Run this to find the actual location:

SHOW data_directory;

New Setup

After installing

e.g. for postgres 10.

# Init DB (using default data folder)
sudo service postgresql-10 initdb
sudo service postgresql-10 start
exit
sudo su - postgres
cp /var/lib/pgsql/10/data/pg_hba.conf /var/lib/pgsql/10/data/pg_hba.conf.orig
# See below for allowing local users to login as postgres user
vi /var/lib/pgsql/10/data/pg_hba.conf
sudo service postgresql-10 reload
exit
sudo su -
psql -U postgres
    > CREATE ROLE <username> NOSUPERUSER CREATEDB CREATEROLE INHERIT LOGIN;
    > ALTER USER <username> WITH PASSWORD '<password>';
    > \q

Backup/ Restore

SQL Dump

The idea behind this dump method is to generate a file with SQL commands that, when fed back to the server, will recreate the database in the same state as it was at the time of the dump. PostgreSQL provides the utility program pg_dump for this purpose. The basic usage of this command is:

pg_dump dbname > dumpfile

You can be more specific, e.g. -n to back up an individual schema, -t for an individual table:
pg_dump -Fc %DATABASE% -f %DUMP_FILE_PATH%

As you see, pg_dump writes its result to the standard output. We will see below how this can be useful. While the above command creates a text file, pg_dump can create files in other formats that allow for parallelism and more fine-grained control of object restoration.

Restore

Non-text file dumps are restored using the pg_restore utility. Text files can use psql
psql dbname < dumpfile

where dumpfile is the file output by the pg_dump command. The database dbname will not be created by this command, so you must create it yourself from template0 before executing psql (e.g., with createdb -T template0 dbname)
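
For the custom-format dump created with -Fc above, the restore goes through pg_restore rather than psql. A minimal sketch (the database and file names are placeholders):

createdb -T template0 mydb
pg_restore -d mydb mydump.dump
# custom-format dumps also allow a parallel restore
pg_restore -d mydb -j 4 mydump.dump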

By default, the psql script will continue to execute after an SQL error is encountered. You might wish to run psql with the ON_ERROR_STOP variable set to alter that behavior and have psql exit with an exit status of 3 if an SQL error occurs:
psql --set ON_ERROR_STOP=on dbname < dumpfile
Either way, you will only have a partially restored database. 
pg_dump dumps only a single database at a time, and it does not dump information about roles or tablespaces (because those are cluster-wide rather than per-database). To support convenient dumping of the entire contents of a database cluster, the pg_dumpall program is provided. pg_dumpall backs up each database in a given cluster, and also preserves cluster-wide data such as role and tablespace definitions. The basic usage of this command is:
pg_dumpall > dumpfile
The resulting dump can be restored with psql:
psql -f dumpfile postgres

Barman

Barman is the Postgres Backup and Recovery Manager. See http://docs.pgbarman.org/release/2.12/
It will backup the databases configured on a DB server.

Options: streaming (preferred) vs rsync.

This is a simple script we used to keep a certain number of folders (backups). Note it will only delete one folder (the oldest) at a time, so if you have many more folders you may need to delete them manually first.

#!/bin/bash

dir="<barmanDir>/Local/base/"
min_dirs=3   # if there are more dirs than this we will delete the oldest

[[ $(find "$dir" -maxdepth 1 -type d | wc -l) -ge $min_dirs ]] &&
IFS= read -r -d $'\0' line < <(find "$dir" -maxdepth 1 -printf '%T@ %p\0' 2>/dev/null | sort -z -n)
file="${line#* }"
ls -lLd "$file"
rm -rf "$file"

Starting stopping

service postgresql-9.6 initdb
chkconfig postgresql-9.6 on
service postgresql-9.6 start

Monday, November 05, 2018

Windows Active Directory Groups/ Roles

To list a user's Active Directory groups, run this:

net user <username> /domain

The problem with this is that group names are truncated to 21 characters.

Here are Windows PowerShell commands to do the same (less memorable though):

(New-Object System.DirectoryServices.DirectorySearcher("(&(objectCategory=User)(samAccountName=$($env:username)))")).FindOne().GetDirectoryEntry().memberOf
or
([ADSISEARCHER]"samaccountname=$($env:USERNAME)").Findone().Properties.memberof

Tuesday, January 10, 2017

Unit testing Spring caching with grails

Grails unit tests do not autowire by default (certainly in version 2.2.3), so to enable caching in a unit test we had to jump through a few hoops.

The easiest thing in the end was to manually create an XML file to load the bean in question. (Once we created the bean in the XML, the cacheable annotations were recognized.)
This worked in terms of loading the bean with the caching functionality built in, but then we began to run into class cast exceptions because of the way that Spring implements the caching (using proxies). See http://spring.io/blog/2012/05/23/transactions-caching-and-aop-understanding-proxy-usage-in-spring

The easiest solution we found to this, was to create an interface for the service in question. Then the proxying was able to cast the dynamically generated proxyClass to the interface.

Test xml (in test/unit)

       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:cache="http://www.springframework.org/schema/cache"
       xmlns:p="http://www.springframework.org/schema/p"
       xmlns:aop="http://www.springframework.org/schema/aop"
       xsi:schemaLocation="
        http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd
        http://www.springframework.org/schema/cache http://www.springframework.org/schema/cache/spring-cache.xsd">

   

   
   
          class="org.springframework.cache.ehcache.EhCacheCacheManager" p:cache-manager-ref="ehcache"/>

   
   
          class="org.springframework.cache.ehcache.EhCacheManagerFactoryBean" p:config-location="TestEhCache.xml"/>




EhCache.xml (in grails-app/conf)


<ehcache xmlns:xsi='http://www.w3.org/2001/XMLSchema-instance'
         xsi:noNamespaceSchemaLocation="http://ehcache.org/ehcache.xsd">

    <defaultCache maxElementsInMemory='100'
                  overflowToDisk='false' />

    <cache name="priorToDate"
           maxElementsInMemory="100"
           eternal="false"
           timeToIdleSeconds="3600"
           timeToLiveSeconds="0"
           overflowToDisk="false"
           memoryStoreEvictionPolicy="LFU"/>

</ehcache>




Interface
import org.springframework.cache.annotation.Cacheable
interface MyServiceIF  {

    // calling stored procedure to determine the as_of_date
    @Cacheable("priorToDate")
    public Date priorToDate(String yyyymmdd);

}

Class
class MyService implements MyServiceIF  {

    static transactional = false

    public Date priorToDate(String yyyymmdd) {
        return evaluate(yyyymmdd, -1);
    }
}

Spock Test

Note also that if you are declaring a method cacheable with multiple parameters, then you may want to define a keyGenerator (or ignore the params).
 public void validateCache() {
        given:
        CacheManager cacheManager = ctx.getBean("cacheManager")
        String result;

        when:
        Cache dateCache = cacheManager.getCache(testName);
        String result1FromCache  = dateCache.get(dateToTest);   // Verify that the cache is empty
        Object resultFromSds
        Object result2FromSds
        Object result2FromCache
        if(testName =="futureBusinessDate" || testName == "pastBusinessDate"){
            resultFromSds = dateToString(daoService."$testName"(dateToTest,1 ))
            Object key = new DefaultKeyGenerator().generate(daoService, DalSdsDateIF.class.getMethod(testName, String.class, int.class), dateToTest, 1)   // compound params, so must generate key
            result2FromCache = dateToString(dateCache.get(key).get());
            result2FromSds = dateToString(daoService."$testName"(dateToTest,1 ))
        } else  {
            resultFromSds = dateToString(daoService."$testName"(dateToTest) )
            result2FromCache = dateToString(dateCache.get(dateToTest).get());
            result2FromSds = dateToString(daoService."$testName"(dateToTest) ) // expect this come from cache, so will not call log again
        }

        then:
        dateCache!=null
        result1FromCache==null    //verify cache is empty
        resultFromSds==expectedResult
        result2FromCache==expectedResult
        result2FromSds==expectedResult
        count ==expectedCallsToLog    // count number of calls  to log.info.. Expect one per call, except for isBusinessDate

        where:
        testName            |  dateToTest   | expectedResult | expectedCallsToLog
        "priorToDate"       | "2016-09-06"  | "2016-09-02"   | 1
        "nextToDate"        | "2016-09-02"  | "2016-09-06"   | 1
        "futureBusinessDate"| "2016-09-02"  | "2016-09-06"   | 1
        "pastBusinessDate"  | "2016-09-06"  | "2016-09-02"   | 1

    }

e.g. in the test:
Object key = new DefaultKeyGenerator().generate(daoService, DalSdsDateIF.class.getMethod(testName, String.class, int.class), dateToTest, 1)   // compound params, so must generate key


If you have parameters in the method call that you don't want influencing the cache, you can ignore them, e.g. like this:

 @Cacheable(value="myCache", key="#root.methodName")// Force key name to be fixed no matter what params passed in
 public Map getValues(List warnings){
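
Alternatively, if you want the key to come from only some of the parameters, a SpEL key referencing a single argument works too. A sketch only (the extra warnings parameter and cache name are made up; the method follows the priorToDate example above):

 @Cacheable(value="priorToDate", key="#root.args[0]")   // key on the date argument only, ignore the warnings list
 public Date priorToDate(String yyyymmdd, List warnings) {
     return evaluate(yyyymmdd, -1);
 }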