sikanhe/algolia-elixir
This is the Elixir implementation of the Algolia search API. It is purely functional.
Add to your dependencies:

```elixir
defp deps do
  [{:algolia, "~> 0.8.0"}]
end
```
(Pre-Elixir 1.4) Add `:algolia` to your applications:

```elixir
def application do
  [applications: [:algolia]]
end
```
Set the following environment variables:

```
ALGOLIA_APPLICATION_ID=YOUR_APPLICATION_ID
ALGOLIA_API_KEY=YOUR_API_KEY
```
Or configure them in your Mix config:

```elixir
config :algolia, application_id: "YOUR_APPLICATION_ID", api_key: "YOUR_API_KEY"
```
NOTE: You must use the ADMIN API key, not the SEARCH API key, to enable write access.
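For example, a `config/config.exs` sketch that reads the same environment variables shown above (assumes Elixir ≥ 1.9's `Config` module; the variable names match the ones in this README):

```elixir
# config/config.exs — illustrative sketch, reads credentials from the environment
import Config

config :algolia,
  application_id: System.get_env("ALGOLIA_APPLICATION_ID"),
  api_key: System.get_env("ALGOLIA_API_KEY")
```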
Unlike other OO Algolia clients, you don't need to initialize an index with this client. However, most of the client's search/write functions use the syntax

```elixir
operation(index, args...)
```

so you can easily emulate the `index.function()` syntax using pipes:

```elixir
"my_index" |> operation(args)
```
All responses are deserialized into maps before being returned as one of:

```elixir
{:ok, response}
{:error, error_code, response}
{:error, "Cannot connect to Algolia"}
```

For the last error, the client implements a retry strategy across all Algolia hosts with increasing timeouts; it should only return this error after it has tried all 4 hosts. More details here.
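A minimal sketch of handling these three shapes with a `case` expression (the `search/2` call is from this README; the `"hits"` response key is an assumption based on Algolia's standard search response):

```elixir
case search("my_index", "some query") do
  {:ok, %{"hits" => hits}} ->
    # Successful search: work with the list of hit maps
    hits

  {:error, code, response} ->
    # Algolia returned an error status; inspect code/response
    IO.puts("Algolia error #{code}: #{inspect(response)}")
    []

  {:error, "Cannot connect to Algolia"} ->
    # All retry hosts were exhausted
    []
end
```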
"my_index"|>search("some query")
With options:

```elixir
"my_index" |> search("some query", attributesToRetrieve: "firstname", hitsPerPage: 20)
```
See all available search options here.
```elixir
multi([
  %{index_name: "my_index1", query: "search query"},
  %{index_name: "my_index2", query: "another query", hitsPerPage: 3},
  %{index_name: "my_index3", query: "3rd query", tagFilters: "promotion"}
])
```
You can specify a strategy to optimize your multiple queries:

- `:none`: Execute the sequence of queries until the end.
- `:stop_if_enough_matches`: Execute the sequence of queries until the number of hits is reached by the sum of hits.

```elixir
multi([query1, query2], strategy: :stop_if_enough_matches)
```
All `save_*` operations will override the object at the given objectID.
Save a single object to an index without specifying an objectID. The object must contain an objectID field, or use the `id_attribute` option (see below):

```elixir
"my_index" |> save_object(%{objectID: "1"})
```
Save a single object with a given objectID:

```elixir
"my_index" |> save_object(%{title: "hello"}, "12345")
```
Save multiple objects to an index:

```elixir
"my_index" |> save_objects([%{objectID: "1"}, %{objectID: "2"}])
```
Partially update a single object:

```elixir
"my_index" |> partial_update_object(%{title: "hello"}, "12345")
```
Update multiple objects. Each object must have an objectID, or use the `id_attribute` option (see below):

```elixir
"my_index" |> partial_update_objects([%{objectID: "1"}, %{objectID: "2"}])
```
By default, a partial update creates a new object if no object exists at the objectID. You can turn this off by passing `false` to the `:upsert?` option:

```elixir
"my_index" |> partial_update_object(%{title: "hello"}, "12345", upsert?: false)
"my_index" |> partial_update_objects([%{id: "1"}, %{id: "2"}], id_attribute: :id, upsert?: false)
```
All write functions, such as `save_object` and `partial_update_object`, come with an `id_attribute` option that lets you specify the objectID from an existing field in the object, so you do not have to generate it yourself:

```elixir
"my_index" |> save_object(%{id: "2"}, id_attribute: :id)
```
It also works for batch operations, such as `save_objects` and `partial_update_objects`:

```elixir
"my_index" |> save_objects([%{id: "1"}, %{id: "2"}], id_attribute: :id)
```
However, this option cannot be used together with an explicit objectID argument:

```elixir
"my_index" |> save_object(%{id: "1234"}, "1234", id_attribute: :id)
# => Error
```
All write operations can be waited on by simply piping the response into `wait/1`:

```elixir
"my_index" |> save_object(%{id: "123"}) |> wait
```
Since the client polls the server to check publishing status, you can specify the time between each poll tick; the default is 1000 ms:

```elixir
"my_index" |> save_object(%{id: "123"}) |> wait(2_000)
```
You can also use the underlying `wait_task` function explicitly:

```elixir
{:ok, %{"taskID" => task_id, "indexName" => index}} =
  "my_index" |> save_object(%{id: "123"})

wait(index, task_id)
```

or with the polling interval option:

```elixir
wait(index, task_id, 2_000)
```
List all indexes:

```elixir
list_indexes()
```
Move an index to a new one:

```elixir
move_index(source_index, destination_index)
```
Copy an index to a new one:

```elixir
copy_index(source_index, destination_index)
```
Clear all objects from an index:

```elixir
clear_index(index)
```
Get the settings of an index:

```elixir
get_settings(index)
```
Example response
```elixir
{:ok,
 %{"minWordSizefor1Typo" => 4,
   "minWordSizefor2Typos" => 8,
   "hitsPerPage" => 20,
   "attributesToIndex" => nil,
   "attributesToRetrieve" => nil,
   "attributesToSnippet" => nil,
   "attributesToHighlight" => nil,
   "ranking" => ["typo", "geo", "words", "proximity", "attribute", "exact", "custom"],
   "customRanking" => nil,
   "separatorsToIndex" => "",
   "queryType" => "prefixAll"}}
```
Change the settings of an index:

```elixir
set_settings(index, %{"hitsPerPage" => 20})
# => %{"updatedAt" => "2013-08-21T13:20:18.960Z",
#      "taskID" => 10210332,
#      "indexName" => "my_index"}
```
- get_object
- save_object
- save_objects
- update_object
- partial_update_object
- partial_update_objects
- delete_object
- delete_objects
- list_indexes
- clear_index
- wait_task
- wait (convenience function for piping response into wait_task)
- set_settings
- get_settings
- list_user_keys
- get_user_key
- add_user_key
- update_user_key
- delete_user_key
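As an illustrative sketch, several of the functions listed above can be combined in a small wrapper module (the `MyApp.Search` module, the `"products"` index name, and the `"hits"` response key are assumptions, not part of the library):

```elixir
defmodule MyApp.Search do
  # Illustrative wrapper around functions from this README,
  # assuming they live on the Algolia module.
  import Algolia

  # Batch-save records keyed by their :id field, then block
  # until Algolia has published them.
  def index_products(products) do
    "products"
    |> save_objects(products, id_attribute: :id)
    |> wait()
  end

  # Search and return just the hit maps, swallowing errors.
  def find(query) do
    case search("products", query, hitsPerPage: 20) do
      {:ok, %{"hits" => hits}} -> hits
      {:error, _code, _response} -> []
      {:error, _reason} -> []
    end
  end
end
```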