Welcome to the What Did I Learn Today series. The intention of this blog is to compose the stuff I learn on a day-to-day basis and journal it here. Please follow the series and provide your support.
See you in the series.
A group of friends is going on holiday together. They have to get to a meeting point (the start of the journey) using N cars. There are P[K] people and S[K] seats in the K-th car, for K in range [0..N-1]. Some of the seats in the cars may be free, so it is possible for some of the friends to change the car they are in. The friends have decided that, in order to be ecological, they will leave some cars parked at the meeting point and travel with as few cars as possible.
Given two arrays P and S, consisting of N integers each, return the minimum number of cars needed to take all of the friends on holiday.
Example:
1. Given P = [1,4,1] and S = [1, 5, 1], the function should return 2. A person from car number 0 can travel in car number 1 instead. This way, car number 0 can be left parked at the meeting point.
2. Given P = [4,4,2,4] and S = [5,5,2,5], the function should return 3. One person from car number 2 can travel in car number 0 and the other person from car number 2 can travel in car number 3.
3. Given P = [2,3,4,2] and S = [2,5,7,2], the function should return 2. Passengers from car number 0 can travel in car number 1 and passengers from car number 3 can travel in car number 2.
Write an efficient algorithm for this task. The greedy observation: count the total number of people, then fill seats starting from the cars with the most seats until everyone is covered.
import java.util.Arrays;

class Solution {
    public int solution(int[] P, int[] S) {
        // Total number of people who need a seat.
        int sum = 0;
        for (int people : P) sum += people;

        // Sort seats ascending, then take cars from the largest end
        // until the accumulated seats cover everyone.
        Arrays.sort(S);
        int seatSum = 0;
        int cars = 0;
        for (int i = S.length - 1; i >= 0 && seatSum < sum; i--) {
            seatSum += S[i];
            cars++;
        }
        return cars;
    }
}
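To sanity-check the solution against the examples above, a throwaway harness can be used (the Main class is mine, not part of the task):

class Main {
    public static void main(String[] args) {
        Solution s = new Solution();
        System.out.println(s.solution(new int[]{1, 4, 1}, new int[]{1, 5, 1}));       // expected 2
        System.out.println(s.solution(new int[]{4, 4, 2, 4}, new int[]{5, 5, 2, 5})); // expected 3
        System.out.println(s.solution(new int[]{2, 3, 4, 2}, new int[]{2, 5, 7, 2})); // expected 2
    }
}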
Postgres comes with two commands for simple interprocess communication: NOTIFY and LISTEN. Here’s what the documentation has to say on them:

NOTIFY provides a simple interprocess communication mechanism for a collection of processes accessing the same PostgreSQL database. A payload string can be sent along with the notification, and higher-level mechanisms for passing structured data can be built by using tables in the database to pass additional data from notifier to listener(s). Whenever the command NOTIFY channel is invoked, either by this session or another one connected to the same database, all the sessions currently listening on that notification channel are notified, and each will in turn notify its connected client application.

LISTEN registers the current session as a listener on the notification channel named channel. If the current session is already registered as a listener for this notification channel, nothing is done.
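Before wiring up any triggers, the whole round trip can be seen in a single psql session (the channel name my_channel is arbitrary):

LISTEN my_channel;
NOTIFY my_channel, 'hello';
-- Asynchronous notification "my_channel" with payload "hello" received from server process with PID ...

A session receives its own notifications too, which makes this easy to try out.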
As an example, take an orders table that, besides the primary key, contains an email address to identify the person who placed the order and a bigint field to store the total order amount in cents:

CREATE TABLE orders (
id SERIAL PRIMARY KEY,
email TEXT NOT NULL,
total BIGINT NOT NULL
);
Next, we define a trigger function that builds a JSON payload describing the change and broadcasts it:

CREATE OR REPLACE FUNCTION notify_event() RETURNS TRIGGER AS $$
DECLARE
    record RECORD;
    payload JSON;
BEGIN
    IF (TG_OP = 'DELETE') THEN
        record = OLD;
    ELSE
        record = NEW;
    END IF;

    payload = json_build_object('table', TG_TABLE_NAME,
                                'action', TG_OP,
                                'data', row_to_json(record));

    PERFORM pg_notify('events', payload::text);

    RETURN NULL;
END;
$$ LANGUAGE plpgsql;
A few things to note: the TG_OP special variable decides which version of the row we want to serialize, json_build_object and row_to_json generate the notification payload, and pg_notify broadcasts a message on the events channel. The function returns NULL since this is an AFTER trigger. Next comes the notify_order_event trigger, which will call this function after we perform a CRUD operation on the orders table:

CREATE TRIGGER notify_order_event
AFTER INSERT OR UPDATE OR DELETE ON orders
FOR EACH ROW EXECUTE PROCEDURE notify_event();
Let's try it out. In a psql session, start listening on the events channel, then insert a row into orders:

LISTEN events;
INSERT INTO orders (email, total) VALUES ('test@example.com', 10000);
INSERT 0 1
Asynchronous notification "events" with payload "{"table" : "orders", "action" : "INSERT", "data" : {"id":1,"email":"test@example.com","total":10000}}" received from server process with PID 5315.
Watching notifications in psql is only a demo, though, so let's add another listener, this time from Ruby via the Sequel gem:

require 'sequel'
DB = Sequel.connect('postgres://user@localhost/notify-test')
puts 'Listening for DB events...'
DB.listen(:events, loop: true) do |_channel, _pid, payload|
puts payload
end
Here Sequel::Postgres::Database#listen listens for events in a loop. Run the script and insert another row:

→ ruby test.rb
Listening for DB events...
{"table" : "orders", "action" : "INSERT", "data" : {"id":2,"email":"test@example.com","total":10000}}
To back up an Elasticsearch cluster to Google Cloud Storage, first install the GCS repository plugin:

./bin/elasticsearch-plugin install repository-gcs
Then register a snapshot repository pointing at a GCS bucket:

curl -XPUT 'localhost:9200/_snapshot/GceRepository?pretty' -d '
{
  "type": "gcs",
  "settings": {
    "bucket": "elkp",
    "service_account": "service-acc.json"
  }
}'

{ "acknowledged": true }
Listing all registered repositories confirms the setup:

curl -XGET 'localhost:9200/_snapshot/_all?pretty'
{ "GceRepository" : { "type" : "gcs", "settings" : { "bucket" : "elkp", "service_account" : "service-acc.json" } } }
Now take a snapshot:

curl -XPUT 'localhost:9200/_snapshot/GceRepository/snapshot_1?wait_for_completion=true'
“The wait_for_completion parameter specifies whether or not the request should return immediately after snapshot initialization (default) or wait for snapshot completion. During snapshot initialization, information about all previous snapshots is loaded into the memory, which means that in large repositories it may take several seconds (or even minutes) for this command to return even if the wait_for_completion parameter is set to false.” - straight from the Elasticsearch docs.

Restoring a snapshot is a single POST:

curl -XPOST 'http://localhost:9200/_snapshot/GceRepository/snapshot_1/_restore'
A few other useful endpoints:

## status for a currently running snapshot
GET /_snapshot/_status
## status for a given repository
GET /_snapshot/GceRepository/_status
## status for a given snapshot id
GET /_snapshot/GceRepository/snapshot_1/_status
## deleting a snapshot
DELETE /_snapshot/GceRepository/snapshot_1
Bash file test operators:

-e -> file exists
-a -> file exists; identical in effect to -e. It has been "deprecated," and its use is discouraged.
-f -> file is a regular file (not a directory or device file)
-s -> file is not zero size
-d -> file is a directory
-b -> file is a block device
-c -> file is a character device

device0="/dev/sda2"    # /  (root directory)
if [ -b "$device0" ]
then
  echo "$device0 is a block device."
fi
# /dev/sda2 is a block device.

device1="/dev/ttyS1"   # PCMCIA modem card.
if [ -c "$device1" ]
then
  echo "$device1 is a character device."
fi
# /dev/ttyS1 is a character device.

-p -> file is a pipe

function show_input_type()
{
  [ -p /dev/fd/0 ] && echo PIPE || echo STDIN
}
show_input_type "Input"          # STDIN
echo "Input" | show_input_type   # PIPE
# This example courtesy of Carl Anderson.

-h -> file is a symbolic link
-L -> file is a symbolic link
-S -> file is a socket
-t -> file (descriptor) is associated with a terminal device. This test option may be used to check whether the stdin [ -t 0 ] or stdout [ -t 1 ] in a given script is a terminal.
-r -> file has read permission (for the user running the test)
-w -> file has write permission (for the user running the test)
-x -> file has execute permission (for the user running the test)
-g -> set-group-id (sgid) flag set on file or directory. If a directory has the sgid flag set, then a file created within that directory belongs to the group that owns the directory, not necessarily to the group of the user who created the file. This may be useful for a directory shared by a workgroup.
-u -> set-user-id (suid) flag set on file. A binary owned by root with the set-user-id flag set runs with root privileges, even when an ordinary user invokes it. This is useful for executables (such as pppd and cdrecord) that need to access system hardware. Lacking the suid flag, these binaries could not be invoked by a non-root user. A file with the suid flag set shows an s in its permissions:

-rwsr-xr-t    1 root  178236 Oct  2  2000  /usr/sbin/pppd

-k -> sticky bit set. Commonly known as the sticky bit, the save-text-mode flag is a special type of file permission. If a file has this flag set, that file will be kept in cache memory, for quicker access. If set on a directory, it restricts write permission. Setting the sticky bit adds a t to the permissions on the file or directory listing:

drwxrwxrwt    7 root         1024 May 19 21:26 tmp/

This restricts altering or deleting specific files in that directory to the owner of those files. If a user does not own a directory that has the sticky bit set, but has write permission in that directory, she can only delete those files that she owns in it. This keeps users from inadvertently overwriting or deleting each other's files in a publicly accessible directory, such as /tmp. (The owner of the directory or root can, of course, delete or rename files there.)

-O -> you are owner of file
-G -> group-id of file same as yours
-N -> file modified since it was last read
f1 -nt f2 -> file f1 is newer than f2
f1 -ot f2 -> file f1 is older than f2
f1 -ef f2 -> files f1 and f2 are hard links to the same file
! -> "not"; reverses the sense of the tests above (returns true if condition absent).
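A quick sketch combining a few of these tests (paths chosen purely for illustration):

#!/bin/bash
# Regular-file and readability checks before reading a file.
target="/etc/passwd"
if [ -f "$target" ] && [ -r "$target" ]; then
  echo "$target is a regular, readable file."
fi

# Directory plus sticky-bit check, as discussed above for /tmp.
if [ -d /tmp ] && [ -k /tmp ]; then
  echo "/tmp is a directory with the sticky bit set."
fi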
The simplest Logstash pipeline reads events from stdin and prints them back to stdout:

input {
stdin {
codec => 'plain'
}
}
output {
stdout {
codec => 'rubydebug'
}
}
To accept events over TCP instead:

## logstash.conf
input {
  tcp {
    port => '5300'
  }
}
output {
  stdout {
    codec => 'rubydebug'
  }
}
Check the config for syntax errors, then start Logstash:

logstash -f logstash.conf --configtest
logstash -f logstash.conf
With the tcp input, every event carries a few default fields:

@timestamp: an ISO 8601 timestamp.
message: the event's message.
@version: the version of the event format. The current version is 1.
host: the host from which the event was sent.
port: the client's port.
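For example, sending one line over TCP (echo 'hello world' | nc localhost 5300) produces an event roughly like this (host and port values are illustrative):

{
       "message" => "hello world",
      "@version" => "1",
    "@timestamp" => "2016-06-20T14:05:12.320Z",
          "host" => "werain",
          "port" => 52874
}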
Filters sit between inputs and outputs and transform events; grok is the filter for matching messages against regular-expression patterns:

filter {
  grok {
    ## grok filter plugin
  }
}
Suppose the application emits logs like these, where ActiveRecord SQL queries are mixed in with plain messages:

[20-JUN-2016 14:00:23 UTC] Received Message
[20-JUN-2016 14:00:24 UTC] Before query the IP Address
(1.0ms) SELECT "ip_addresses"."address" FROM "ip_addresses" WHERE "ip_addresses"."resporg_accnt_id" = 3
[20-JUN-2016 14:00:24 UTC] After query the IP Address
[20-JUN-2016 14:00:24 UTC] The Ip address found is X.X.X.X
[20-JUN-2016 14:00:27 UTC] Quering ResporgID
ResporgAccountId Load (2.0ms) SELECT resporg_account_ids.*, tfxc_fees.fee as fee FROM "resporg_account_ids" AS resporg_account_ids LEFT JOIN ip_addresses ON resporg_account_ids.id = ip_addresses.resporg_accnt_id LEFT JOIN tfxc_fees ON resporg_account_ids.id = tfxc_fees.resporg_account_id_id WHERE "resporg_account_ids"."active" = 't' AND (((ip_addresses.address = 'x.x.x.x' AND ip_addresses.reserve = 't') AND ('x.x.x.x' = ANY (origin_sip_trunk_ip))) OR (resporg_account_ids.resporg_account_id = 'XXXX') OR (resporg_account_ids.resporg_account_id = 'XXXX'))
[20-JUN-2016 14:00:27] Resporg ID is TIN
[20-JUN-2016 14:00:29 UTC] Querying Freeswitchinstance
FreeswitchInstance Load (1.0ms) SELECT "freeswitch_instances".* FROM "freeswitch_instances" WHERE "freeswitch_instances"."state" = 'active' ORDER BY "freeswitch_instances"."calls_count" ASC, "freeswitch_instances"."average_system_load" ASC LIMIT 1
[20-JUN-2016 14:00:29 UTC] FreeswitchInstance is IronMan.
[20-JUN-2016 14:00:29 UTC] Get the individual rate
IndividualCeilingRate Load (0.0ms) SELECT "individual_ceiling_rates".* FROM "individual_ceiling_rates" WHERE "individual_ceiling_rates"."resporg_account_id_id" = 7 AND "individual_ceiling_rates"."originating_resporg_id" = 3 LIMIT 1
[20-JUN-2016 14:00:29 UTC] The individual rate is 20
[20-JUN-2016 14:00:30 UTC] Query the individual rate
Rate Load (1.0ms) SELECT "rates".* FROM "rates" WHERE "rates"."resporg_account_id_id" = 3 LIMIT 1
[20-JUN-2016 14:00:30 UTC] The Selected rate is 40

The goal is to retag the SQL lines with type sql and leave the rest alone:
## input the log events via TCP
input {
  tcp {
    port => '5300'
    ## tag incoming events as type custom (the filter below keys off this)
    type => 'custom'
  }
}
filter {
  ## apply this filter only to log events of type custom
  if ([type] == "custom") {
    grok {
      ## load our custom regex patterns
      patterns_dir => "./pattern"
      ## compare the message against the ARSQL regex
      match => { "message" => "%{ARSQL:sql}" }
      ## if the message matches, add a field called "grok"
      add_field => { "grok" => "match" }
    }
    ## the grok field is present only when the regex above matched
    if ([grok] == 'match') {
      ## use the mutate filter plugin to replace the type custom with sql
      mutate {
        replace => { "type" => "sql" }
        ## and remove the grok field added by the earlier filter
        remove_field => ["grok"]
      }
    }
  }
}
## output plugin. For now we use rubydebug, but any other output plugin would work just as well.
output {
  stdout {
    codec => 'rubydebug'
  }
}
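For this to work, the ARSQL pattern has to be defined in a file under the ./pattern directory. The original pattern isn't shown in this post; a rough sketch that would match the ActiveRecord-style query lines above could look like this (the regex is my assumption):

## ./pattern/custom
ARSQL (%{WORD} Load\s+)?\(%{NUMBER}ms\)\s+SELECT %{GREEDYDATA:query}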
{ "message" => "Received Message", "@version" => "1", "@timestamp" => "2016-06-20T14:00:23.320Z", "host" => "werain", "type" => "custom" ## custom tag } { "message" => "(1.0ms) SELECT "ip_addresses"."address" FROM "ip_addresses" WHERE "ip_addresses"."resporg_accnt_id" = 3 ", "@version" => "1", "@timestamp" => "2016-06-20T14:00:24.520Z", "host" => "werain", "type" => "sql" ## we have successfully managed to change the type to sql(from custom) based ## on the grok regex filteration }