API execution process speed up

I am running the API action via Python. The flow goes this way:

  1. Get the status of the server (is it up or not?)
    (delay of 30 sec)
  2. Execute the action using the task ID and get the action ID
    (delay of 60 sec)
  3. Execute the query to get the result
  4. Check the status of the query result
    (delay of 30 sec)
  5. Check the status of the action ID
I have to put a delay between steps 1, 2, and 4 to let the API execute the task. If I don't put in the delay, I don't get the result back, because the BigFix console is still spawning the action ID, and that takes some time.

Is there any way to speed up execution in the API to cut down the delays between the steps?

I don’t think there is an issue with API execution performance here; this is just the reality of how actions are deployed and reported on within BigFix.

You should be able to get rid of the 30 sec delay between steps 1 and 2; I’m not sure what it provides, unless it is a retry delay for when the server is not running.

At step 2, this is actually generating the action and distributing it through the infrastructure to the targeted endpoints. The API doesn’t change how this process works; the same thing would occur if you deployed the action in the console. The action needs time to be gathered and evaluated by the endpoints before they can report back any status. The default minimum reporting interval for clients is also 60 sec in most deployments, so you shouldn’t expect a response back much faster than a minute.

I’m not sure what the difference is between steps 3 & 4, but depending on the size of the download and runtime of the action being deployed, it could take several minutes to complete on each client, so you would need to keep querying for action results (with some reasonable delay) until every targeted client reports its final status (usually Fixed or Failed).
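One way to structure that repeated querying is a bounded polling loop. This is a sketch, assuming a `fetch_statuses` callable that stands in for whatever query returns a `{computer_name: status}` mapping for the action (that helper and the exact status strings are assumptions):

```python
import time

FINAL = {"Fixed", "Failed"}  # statuses treated as terminal

def poll_action(fetch_statuses, interval=30, timeout=600):
    """Poll until every targeted client reports a final status, or time out.

    fetch_statuses: hypothetical callable returning {computer_name: status}.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        statuses = fetch_statuses()
        # Done only when every client has reported a terminal status.
        if statuses and all(s in FINAL for s in statuses.values()):
            return statuses
        time.sleep(interval)
    raise TimeoutError("not all clients reported a final status in time")
```

The timeout matters: a client that is offline may never report, and without a deadline the loop would spin forever.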

IMHO, expecting an action to be sent to, executed on, and results reported back from a remote endpoint in less than 60 seconds is not reasonable for BigFix or any similar product.

I agree with your assessment. Instead of executing the task one server at a time, I created threads that execute the same task ID against 4-5 servers using Python’s threading module.
And I have put the “action status” check in a while loop with a 30 sec delay until I get the completion status.

This is the thread that calls the function:

import threading
import time

# One worker thread per server, staggered by 1 sec between starts
t = threading.Thread(target=call_action, args=(computer_name.strip(), cmd_argu))
time.sleep(1)
threads.append(t)
t.start()

And the function waits until it gets the action status.
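The same fan-out can also be written with `concurrent.futures`, which handles joining and result collection for you. This is a sketch; `call_action` here is a stand-in for your real function, and its body is an assumption:

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

# Stand-in for the real call_action; the return value is an assumption.
def call_action(computer_name, cmd_argu):
    return f"{computer_name}: done"

def run_on_servers(servers, cmd_argu, workers=5):
    """Run call_action against each server concurrently and gather results."""
    results = {}
    with ThreadPoolExecutor(max_workers=workers) as pool:
        futures = {pool.submit(call_action, name.strip(), cmd_argu): name
                   for name in servers}
        for fut in as_completed(futures):
            results[futures[fut]] = fut.result()
    return results
```

An advantage over raw `threading.Thread` is that `fut.result()` re-raises any exception from the worker, so a failed server call surfaces in the main thread instead of dying silently.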
